03:30:40,924 AM [main] [INFO] StringValueObjectFactory - Instantiated factory for class org.opendaylight.yang.gen.v1.urn.ietf.params.xml.ns.yang.ietf.inet.types.rev130715.Ipv4AddressNoZone
03:30:40,925 AM [main] [INFO] StringValueObjectFactory - Instantiated factory for class org.opendaylight.yang.gen.v1.urn.ietf.params.xml.ns.yang.ietf.inet.types.rev130715.Ipv4Prefix
03:30:40,925 AM [main] [INFO] StringValueObjectFactory - Instantiated factory for class org.opendaylight.yang.gen.v1.urn.ietf.params.xml.ns.yang.ietf.inet.types.rev130715.Ipv6AddressNoZone
03:30:40,926 AM [main] [INFO] StringValueObjectFactory - Instantiated factory for class org.opendaylight.yang.gen.v1.urn.ietf.params.xml.ns.yang.ietf.inet.types.rev130715.Ipv6Prefix
03:30:42,30 AM [test-akka.actor.default-dispatcher-5] [INFO] Slf4jLogger - Slf4jLogger started
03:30:43,195 AM [test-akka.actor.default-dispatcher-5] [INFO] ArteryTransport - Remoting started with transport [Artery tcp]; listening on address [akka://test@127.0.0.1:2550] with UID [-2638966290294694629]
03:30:43,234 AM [test-akka.actor.default-dispatcher-5] [INFO] Cluster - Cluster Node [akka://test@127.0.0.1:2550] - Starting up, Akka version [2.6.21] ...
03:30:43,401 AM [test-akka.actor.default-dispatcher-5] [INFO] Cluster - Cluster Node [akka://test@127.0.0.1:2550] - Registered cluster JMX MBean [akka:type=Cluster]
03:30:43,401 AM [test-akka.actor.default-dispatcher-5] [INFO] Cluster - Cluster Node [akka://test@127.0.0.1:2550] - Started up successfully
03:30:43,494 AM [test-akka.actor.default-dispatcher-10] [INFO] Cluster - Cluster Node [akka://test@127.0.0.1:2550] - No downing-provider-class configured, manual cluster downing required, see https://doc.akka.io/docs/akka/current/typed/cluster.html#downing
03:30:43,495 AM [test-akka.actor.default-dispatcher-10] [INFO] Cluster - Cluster Node [akka://test@127.0.0.1:2550] - No seed-nodes configured, manual cluster join required, see https://doc.akka.io/docs/akka/current/typed/cluster.html#joining
03:30:43,598 AM [test-akka.actor.default-dispatcher-5] [INFO] Slf4jLogger - Slf4jLogger started
03:30:43,646 AM [test-akka.actor.default-dispatcher-5] [INFO] ArteryTransport - Remoting started with transport [Artery tcp]; listening on address [akka://test@127.0.0.1:2552] with UID [-1060947396738902131]
03:30:43,648 AM [test-akka.actor.default-dispatcher-5] [INFO] Cluster - Cluster Node [akka://test@127.0.0.1:2552] - Starting up, Akka version [2.6.21] ...
03:30:43,665 AM [test-akka.actor.default-dispatcher-5] [WARN] Cluster - Could not register Cluster JMX MBean with name=akka:type=Cluster as it is already registered.
If you are running multiple clusters in the same JVM, set 'akka.cluster.jmx.multi-mbeans-in-same-jvm = on' in config
03:30:43,665 AM [test-akka.actor.default-dispatcher-5] [INFO] Cluster - Cluster Node [akka://test@127.0.0.1:2552] - Started up successfully
03:30:43,704 AM [test-akka.actor.default-dispatcher-5] [INFO] Cluster - Cluster Node [akka://test@127.0.0.1:2552] - No downing-provider-class configured, manual cluster downing required, see https://doc.akka.io/docs/akka/current/typed/cluster.html#downing
03:30:43,705 AM [test-akka.actor.default-dispatcher-5] [INFO] Cluster - Cluster Node [akka://test@127.0.0.1:2552] - No seed-nodes configured, manual cluster join required, see https://doc.akka.io/docs/akka/current/typed/cluster.html#joining
03:30:43,764 AM [test-akka.actor.default-dispatcher-11] [DEBUG] NetconfNodeManagerTest$TestMasterActor - Actor created TestActor[akka://test/user/akka:test@127.0.0.1:2552_device]
03:30:43,790 AM [test-akka.actor.default-dispatcher-5] [INFO] EmptyLocalActorRef - Message [org.opendaylight.controller.cluster.common.actor.Monitor] from TestActor[akka://test/user/akka:test@127.0.0.1:2552_device] to Actor[akka://test/user/termination-monitor] was not delivered. [1] dead letters encountered. If this is not an expected behavior then Actor[akka://test/user/termination-monitor] may have terminated unexpectedly. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
03:30:43,934 AM [main] [DEBUG] NetconfNodeManager - RemoteDevice{device}: Operational state for node Uri{value=device} created: Node{nodeId=Uri{value=device}, augmentation=[NetconfNode{clusteredConnectionStatus=ClusteredConnectionStatus{netconfMasterNode=akka://test@127.0.0.1:2552}, connectionStatus=Connected, host=Host{ipAddress=IpAddress{ipv4Address=Ipv4Address{value=127.0.0.1}}}, port=PortNumber{value=9999}}]}
03:30:43,936 AM [main] [DEBUG] NetconfNodeManager - RemoteDevice{device}: Slave actor created with name Actor[akka://test/user/$a#-130973387]
03:30:43,938 AM [test-akka.actor.default-dispatcher-10] [DEBUG] NetconfNodeActor - Actor created Actor[akka://test/user/$a#-130973387]
03:30:43,939 AM [main] [DEBUG] NetconfNodeManager - RemoteDevice{device}: Sending AskForMasterMountPoint [slaveActorRef=Actor[akka://test/user/$a#-130973387]] message to master ActorSelection[Anchor(akka://test@127.0.0.1:2552/), Path(/user/akka:test@127.0.0.1:2552_device)]
03:30:43,940 AM [test-akka.actor.default-dispatcher-5] [INFO] EmptyLocalActorRef - Message [org.opendaylight.controller.cluster.common.actor.Monitor] from Actor[akka://test/user/$a#-130973387] to Actor[akka://test/user/termination-monitor] was not delivered. [1] dead letters encountered. If this is not an expected behavior then Actor[akka://test/user/termination-monitor] may have terminated unexpectedly. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
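
Note: the JMX WARN above quotes the setting 'akka.cluster.jmx.multi-mbeans-in-same-jvm = on', and the dead-letter entries quote 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'. The following is a minimal Java sketch (not taken from the test source; the class name, ports and "off" values are illustrative assumptions) of passing those settings to two ActorSystems created in the same JVM, as this test does:

    import akka.actor.ActorSystem;
    import com.typesafe.config.Config;
    import com.typesafe.config.ConfigFactory;

    public final class TwoSystemsOneJvmSketch {
        public static void main(String[] args) {
            // Settings quoted by the log entries above; values here are illustrative.
            Config common = ConfigFactory.parseString(
                "akka.actor.provider = cluster\n"
                + "akka.cluster.jmx.multi-mbeans-in-same-jvm = on\n"
                + "akka.log-dead-letters = off\n"
                + "akka.log-dead-letters-during-shutdown = off\n")
                .withFallback(ConfigFactory.load());
            // The log shows one member on port 2552 and one on 2550 (Artery lines above).
            ActorSystem masterSystem = ActorSystem.create("test",
                ConfigFactory.parseString("akka.remote.artery.canonical.port = 2552").withFallback(common));
            ActorSystem slaveSystem = ActorSystem.create("test",
                ConfigFactory.parseString("akka.remote.artery.canonical.port = 2550").withFallback(common));
            masterSystem.terminate();
            slaveSystem.terminate();
        }
    }
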
03:30:44,678 AM [test-akka.actor.default-dispatcher-11] [DEBUG] NetconfNodeManagerTest$TestMasterActor - RemoteDevice{device}: received message AskForMasterMountPoint [slaveActorRef=Actor[akka://test@127.0.0.1:2550/user/$a#-130973387]]
03:30:44,679 AM [test-akka.actor.default-dispatcher-11] [WARN] NetconfNodeManagerTest$TestMasterActor - RemoteDevice{device}: Received AskForMasterMountPoint [slaveActorRef=Actor[akka://test@127.0.0.1:2550/user/$a#-130973387]] but we don't appear to be the master
03:30:44,701 AM [test-akka.actor.default-dispatcher-11] [ERROR] NetconfNodeManager - RemoteDevice{device}: Failed to send message AskForMasterMountPoint [slaveActorRef=Actor[akka://test/user/$a#-130973387]] to ActorSelection[Anchor(akka://test@127.0.0.1:2552/), Path(/user/akka:test@127.0.0.1:2552_device)]. Slave mount point could not be created
org.opendaylight.netconf.topology.singleton.messages.NotMasterException: Actor TestActor[akka://test/user/akka:test@127.0.0.1:2552_device] is not the current master
    at org.opendaylight.netconf.topology.singleton.impl.actors.NetconfNodeActor.handleReceive(NetconfNodeActor.java:145)
    at org.opendaylight.netconf.topology.singleton.impl.NetconfNodeManagerTest$TestMasterActor.handleReceive(NetconfNodeManagerTest.java:406)
    at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:24)
    at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:20)
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214)
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213)
    at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:20)
    at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269)
    at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:270)
    at akka.actor.Actor.aroundReceive(Actor.scala:537)
    at akka.actor.Actor.aroundReceive$(Actor.scala:535)
    at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:220)
    at akka.actor.ActorCell.receiveMessage(ActorCell.scala:579)
    at akka.actor.ActorCell.invoke(ActorCell.scala:547)
    at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:270)
    at akka.dispatch.Mailbox.run(Mailbox.scala:231)
    at akka.dispatch.Mailbox.exec(Mailbox.scala:243)
    at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:373)
    at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1182)
    at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1655)
    at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1622)
    at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:165)
03:30:45,56 AM [test-akka.actor.default-dispatcher-11] [DEBUG] NetconfNodeManagerTest$TestMasterActor - RemoteDevice{device}: received message org.opendaylight.netconf.topology.singleton.messages.CreateInitialMasterActorData@26ad5c63
03:30:45,62 AM [test-akka.actor.default-dispatcher-11] [DEBUG] NetconfNodeManagerTest$TestMasterActor - RemoteDevice{device}: Master is ready.
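
Note: the ERROR above is the slave-side outcome of an Akka "ask" to the master actor that was answered with NotMasterException. A minimal Java sketch of that request/failure shape (class and method names, the generic Object message and the 5-second timeout are assumptions for illustration; the real AskForMasterMountPoint handling lives in NetconfNodeManager):

    import java.time.Duration;
    import java.util.concurrent.CompletionStage;

    import akka.actor.ActorSelection;
    import akka.pattern.Patterns;

    final class AskMasterSketch {
        // 'message' stands in for the AskForMasterMountPoint instance.
        static void askForMasterMountPoint(ActorSelection master, Object message) {
            CompletionStage<Object> reply = Patterns.ask(master, message, Duration.ofSeconds(5));
            reply.whenComplete((response, failure) -> {
                if (failure != null) {
                    // e.g. NotMasterException (as above) or AskTimeoutException
                    System.err.println("Slave mount point could not be created: " + failure);
                } else {
                    System.out.println("AskForMasterMountPoint succeeded: " + response);
                }
            });
        }
    }
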
03:30:45,65 AM [main] [DEBUG] NetconfNodeManager - RemoteDevice{device}: Operational state for node Uri{value=device} created: Node{nodeId=Uri{value=device}, augmentation=[NetconfNode{clusteredConnectionStatus=ClusteredConnectionStatus{netconfMasterNode=akka://test@127.0.0.1:2552}, connectionStatus=Connected, host=Host{ipAddress=IpAddress{ipv4Address=Ipv4Address{value=127.0.0.1}}}, port=PortNumber{value=9999}}]}
03:30:45,67 AM [test-akka.actor.default-dispatcher-11] [DEBUG] NetconfNodeActor - RemoteDevice{device}: received message org.opendaylight.netconf.topology.singleton.messages.RefreshSlaveActor@3eae5b9d
03:30:45,67 AM [main] [DEBUG] NetconfNodeManager - RemoteDevice{device}: Sending AskForMasterMountPoint [slaveActorRef=Actor[akka://test/user/$a#-130973387]] message to master ActorSelection[Anchor(akka://test@127.0.0.1:2552/), Path(/user/akka:test@127.0.0.1:2552_device)]
03:30:45,80 AM [test-akka.actor.default-dispatcher-5] [DEBUG] NetconfNodeManagerTest$TestMasterActor - RemoteDevice{device}: received message AskForMasterMountPoint [slaveActorRef=Actor[akka://test@127.0.0.1:2550/user/$a#-130973387]]
03:30:45,80 AM [test-akka.actor.default-dispatcher-5] [DEBUG] NetconfNodeManagerTest$TestMasterActor - RemoteDevice{device}: Sending RegisterMountPoint reply to Actor[akka://test@127.0.0.1:2550/user/$a#-130973387]
03:30:45,99 AM [test-akka.actor.default-dispatcher-11] [DEBUG] NetconfNodeActor - RemoteDevice{device}: received message RegisterMountPoint [allSourceIdentifiers=[SourceIdentifier [testID]], masterActorRef=Actor[akka://test@127.0.0.1:2552/user/akka:test@127.0.0.1:2552_device]]
03:30:45,115 AM [test-akka.actor.default-dispatcher-11] [INFO] SharedEffectiveModelContextFactory - Using weak references
03:30:45,139 AM [test-akka.actor.default-dispatcher-10] [DEBUG] NetconfNodeManager - RemoteDevice{device}: AskForMasterMountPoint [slaveActorRef=Actor[akka://test/user/$a#-130973387]] message to ActorSelection[Anchor(akka://test@127.0.0.1:2552/), Path(/user/akka:test@127.0.0.1:2552_device)] succeeded
03:30:46,157 AM [test-akka.actor.default-dispatcher-11] [WARN] NetconfNodeActor - RemoteDevice{device}: Failed to resolve schema context - retrying...
org.opendaylight.yangtools.yang.model.repo.api.MissingSchemaSourceException: All available providers exhausted
    at org.opendaylight.yangtools.yang.model.repo.spi.AbstractSchemaRepository.lambda$fetchSource$0(AbstractSchemaRepository.java:73)
    at com.google.common.util.concurrent.AbstractCatchingFuture$AsyncCatchingFuture.doFallback(AbstractCatchingFuture.java:203)
    at com.google.common.util.concurrent.AbstractCatchingFuture$AsyncCatchingFuture.doFallback(AbstractCatchingFuture.java:190)
    at com.google.common.util.concurrent.AbstractCatchingFuture.run(AbstractCatchingFuture.java:133)
    at com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:31)
    at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1270)
    at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:1038)
    at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:808)
    at com.google.common.util.concurrent.AbstractTransformFuture.run(AbstractTransformFuture.java:104)
    at com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:31)
    at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1270)
    at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:1038)
    at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:808)
    at com.google.common.util.concurrent.AbstractCatchingFuture.run(AbstractCatchingFuture.java:135)
    at com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:31)
    at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1270)
    at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:1038)
    at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:808)
    at com.google.common.util.concurrent.SettableFuture.setException(SettableFuture.java:55)
    at org.opendaylight.controller.cluster.schema.provider.impl.RemoteSchemaProvider$1.onComplete(RemoteSchemaProvider.java:54)
    at org.opendaylight.controller.cluster.schema.provider.impl.RemoteSchemaProvider$1.onComplete(RemoteSchemaProvider.java:46)
    at akka.dispatch.OnComplete.internal(Future.scala:299)
    at akka.dispatch.OnComplete.internal(Future.scala:297)
    at akka.dispatch.japi$CallbackBridge.apply(Future.scala:224)
    at akka.dispatch.japi$CallbackBridge.apply(Future.scala:221)
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484)
    at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:63)
    at akka.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:100)
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94)
    at akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:100)
    at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:49)
    at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:48)
    at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:373)
    at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1182)
    at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1655)
    at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1622)
    at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:165)
Caused by: org.opendaylight.yangtools.yang.model.repo.api.MissingSchemaSourceException: All available providers exhausted
    at org.opendaylight.yangtools.yang.model.repo.spi.AbstractSchemaRepository.lambda$fetchSource$0(AbstractSchemaRepository.java:73)
    at com.google.common.util.concurrent.AbstractCatchingFuture$AsyncCatchingFuture.doFallback(AbstractCatchingFuture.java:203)
    at com.google.common.util.concurrent.AbstractCatchingFuture$AsyncCatchingFuture.doFallback(AbstractCatchingFuture.java:190)
    at com.google.common.util.concurrent.AbstractCatchingFuture.run(AbstractCatchingFuture.java:133)
    ... 24 more
Caused by: akka.pattern.AskTimeoutException: Ask timed out on [Actor[akka://test@127.0.0.1:2552/user/akka:test@127.0.0.1:2552_device]] after [1000 ms]. Message of type [org.opendaylight.netconf.topology.singleton.messages.YangTextSchemaSourceRequest]. A typical reason for `AskTimeoutException` is that the recipient actor didn't send a reply.
03:30:46,169 AM [test-akka.actor.default-dispatcher-5] [DEBUG] NetconfNodeManagerTest$TestMasterActor - RemoteDevice{device}: received message YangTextSchemaSourceRequest [sourceIdentifier=SourceIdentifier [testID]]
03:30:46,173 AM [test-akka.actor.default-dispatcher-5] [DEBUG] NetconfNodeManagerTest$TestMasterActor - RemoteDevice{device}: getSchemaSource for SourceIdentifier [testID] succeeded
03:30:46,213 AM [test-akka.actor.default-dispatcher-11] [INFO] NetconfNodeActor - RemoteDevice{device}: Schema context resolved: [ModuleEffectiveStatementImpl{argument=Unqualified{localName=testID}}] - registering slave mount point
03:30:46,219 AM [test-akka.actor.default-dispatcher-11] [INFO] SlaveSalFacade - RemoteDevice{device}: Slave mount point registered.
03:30:46,226 AM [main] [DEBUG] NetconfNodeManager - RemoteDevice{device}: Operational state for node Uri{value=device} created: Node{nodeId=Uri{value=device}, augmentation=[NetconfNode{clusteredConnectionStatus=ClusteredConnectionStatus{netconfMasterNode=akka://test@127.0.0.1:2552}, connectionStatus=Connected, host=Host{ipAddress=IpAddress{ipv4Address=Ipv4Address{value=127.0.0.1}}}, port=PortNumber{value=9999}}]}
03:30:46,227 AM [test-akka.actor.default-dispatcher-11] [DEBUG] NetconfNodeActor - RemoteDevice{device}: received message org.opendaylight.netconf.topology.singleton.messages.RefreshSlaveActor@460e92d5
03:30:46,227 AM [main] [DEBUG] NetconfNodeManager - RemoteDevice{device}: Sending AskForMasterMountPoint [slaveActorRef=Actor[akka://test/user/$a#-130973387]] message to master ActorSelection[Anchor(akka://test@127.0.0.1:2552/), Path(/user/akka:test@127.0.0.1:2552_device)]
03:30:47,245 AM [test-akka.actor.default-dispatcher-11] [WARN] NetconfNodeManager - RemoteDevice{device}: Failed to send message to ActorSelection[Anchor(akka://test@127.0.0.1:2552/), Path(/user/akka:test@127.0.0.1:2552_device)] - retrying...
akka.pattern.AskTimeoutException: Ask timed out on [ActorSelection[Anchor(akka://test@127.0.0.1:2552/), Path(/user/akka:test@127.0.0.1:2552_device)]] after [1000 ms]. Message of type [org.opendaylight.netconf.topology.singleton.messages.AskForMasterMountPoint]. A typical reason for `AskTimeoutException` is that the recipient actor didn't send a reply.
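
Note: both AskTimeoutException entries above are followed by "retrying...", i.e. the component re-issues the 1000 ms ask after a timeout. A minimal Java sketch of that retry-on-timeout shape (the class, method, retry count and unwrapping logic are assumptions, not the OpenDaylight implementation; the 1000 ms timeout matches the log; exceptionallyCompose needs Java 12+):

    import java.time.Duration;
    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.CompletionStage;

    import akka.actor.ActorSelection;
    import akka.pattern.AskTimeoutException;
    import akka.pattern.Patterns;

    final class RetryingAskSketch {
        static CompletionStage<Object> askWithRetry(ActorSelection target, Object message, int retriesLeft) {
            CompletionStage<Object> reply = Patterns.ask(target, message, Duration.ofMillis(1000));
            if (retriesLeft <= 0) {
                return reply;
            }
            return reply.exceptionallyCompose(failure -> isAskTimeout(failure)
                ? askWithRetry(target, message, retriesLeft - 1)   // retry only on timeout
                : CompletableFuture.failedFuture(failure));
        }

        private static boolean isAskTimeout(Throwable failure) {
            return failure instanceof AskTimeoutException || failure.getCause() instanceof AskTimeoutException;
        }
    }
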
03:30:47,255 AM [test-akka.actor.default-dispatcher-5] [DEBUG] NetconfNodeManagerTest$TestMasterActor - RemoteDevice{device}: received message AskForMasterMountPoint [slaveActorRef=Actor[akka://test@127.0.0.1:2550/user/$a#-130973387]]
03:30:47,255 AM [test-akka.actor.default-dispatcher-5] [DEBUG] NetconfNodeManagerTest$TestMasterActor - RemoteDevice{device}: Sending RegisterMountPoint reply to Actor[akka://test@127.0.0.1:2550/user/$a#-130973387]
03:30:47,264 AM [test-akka.actor.default-dispatcher-11] [DEBUG] NetconfNodeActor - RemoteDevice{device}: received message RegisterMountPoint [allSourceIdentifiers=[SourceIdentifier [testID]], masterActorRef=Actor[akka://test@127.0.0.1:2552/user/akka:test@127.0.0.1:2552_device]]
03:30:47,264 AM [test-akka.actor.default-dispatcher-11] [INFO] SlaveSalFacade - RemoteDevice{device}: Slave mount point unregistered.
03:30:47,266 AM [test-akka.actor.default-dispatcher-11] [INFO] NetconfNodeActor - RemoteDevice{device}: Schema context resolved: [ModuleEffectiveStatementImpl{argument=Unqualified{localName=testID}}] - registering slave mount point
03:30:47,268 AM [test-akka.actor.default-dispatcher-11] [INFO] SlaveSalFacade - RemoteDevice{device}: Slave mount point registered.
03:30:47,267 AM [test-akka.actor.default-dispatcher-10] [DEBUG] NetconfNodeManager - RemoteDevice{device}: AskForMasterMountPoint [slaveActorRef=Actor[akka://test/user/$a#-130973387]] message to ActorSelection[Anchor(akka://test@127.0.0.1:2552/), Path(/user/akka:test@127.0.0.1:2552_device)] succeeded
03:30:47,276 AM [main] [DEBUG] NetconfNodeManager - RemoteDevice{device}: Sending poison pill to Actor[akka://test/user/$a#-130973387]
03:30:47,278 AM [test-akka.actor.default-dispatcher-10] [INFO] SlaveSalFacade - RemoteDevice{device}: Slave mount point unregistered.
03:30:47,289 AM [test-akka.actor.default-dispatcher-10] [INFO] CoordinatedShutdown - Running CoordinatedShutdown with reason [ActorSystemTerminateReason]
03:30:47,301 AM [test-akka.actor.default-dispatcher-10] [INFO] LocalActorRef - Message [akka.cluster.ClusterUserAction$Leave] to Actor[akka://test/system/cluster/core/daemon#-497133192] was unhandled. [2] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
03:30:47,307 AM [test-akka.actor.default-dispatcher-10] [INFO] Cluster - Cluster Node [akka://test@127.0.0.1:2550] - Exiting completed
03:30:47,307 AM [test-akka.actor.default-dispatcher-10] [INFO] Cluster - Cluster Node [akka://test@127.0.0.1:2550] - Shutting down...
03:30:47,307 AM [test-akka.actor.default-dispatcher-10] [INFO] Cluster - Cluster Node [akka://test@127.0.0.1:2550] - Successfully shut down
03:30:47,316 AM [test-akka.actor.default-dispatcher-11] [INFO] RemoteActorRefProvider$RemotingTerminator - Shutting down remote daemon.
03:30:47,317 AM [test-akka.actor.default-dispatcher-11] [INFO] RemoteActorRefProvider$RemotingTerminator - Remote daemon shut down; proceeding with flushing remote transports.
03:30:47,360 AM [test-akka.actor.default-dispatcher-11] [WARN] Materializer - [outbound connection to [akka://test@127.0.0.1:2552], control stream] Upstream failed, cause: StreamTcpException: The connection has been aborted
03:30:47,365 AM [test-akka.actor.default-dispatcher-11] [WARN] Materializer - [outbound connection to [akka://test@127.0.0.1:2552], message stream] Upstream failed, cause: StreamTcpException: The connection has been aborted
03:30:47,376 AM [test-akka.actor.default-dispatcher-11] [INFO] RemoteActorRefProvider$RemotingTerminator - Remoting shut down.
03:30:47,411 AM [test-akka.actor.default-dispatcher-5] [INFO] CoordinatedShutdown - Running CoordinatedShutdown with reason [ActorSystemTerminateReason]
03:30:47,420 AM [test-akka.actor.default-dispatcher-5] [INFO] Cluster - Cluster Node [akka://test@127.0.0.1:2552] - Exiting completed
03:30:47,420 AM [test-akka.actor.default-dispatcher-5] [INFO] Cluster - Cluster Node [akka://test@127.0.0.1:2552] - Shutting down...
03:30:47,420 AM [test-akka.actor.default-dispatcher-5] [INFO] LocalActorRef - Message [akka.cluster.ClusterUserAction$Leave] to Actor[akka://test/system/cluster/core/daemon#1256263601] was unhandled. [2] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
03:30:47,423 AM [test-akka.actor.default-dispatcher-11] [WARN] Cluster - Could not unregister Cluster JMX MBean with name=akka:type=Cluster as it was not found.
If you are running multiple clusters in the same JVM, set 'akka.cluster.jmx.multi-mbeans-in-same-jvm = on' in config
03:30:47,423 AM [test-akka.actor.default-dispatcher-11] [INFO] Cluster - Cluster Node [akka://test@127.0.0.1:2552] - Successfully shut down
03:30:47,427 AM [test-akka.actor.default-dispatcher-5] [INFO] RemoteActorRefProvider$RemotingTerminator - Shutting down remote daemon.
03:30:47,427 AM [test-akka.actor.default-dispatcher-5] [INFO] RemoteActorRefProvider$RemotingTerminator - Remote daemon shut down; proceeding with flushing remote transports.
03:30:48,447 AM [test-akka.actor.default-dispatcher-21] [INFO] RemoteActorRefProvider$RemotingTerminator - Remoting shut down.
03:30:48,493 AM [test-akka.actor.default-dispatcher-5] [INFO] Slf4jLogger - Slf4jLogger started
03:30:48,520 AM [test-akka.actor.default-dispatcher-5] [INFO] ArteryTransport - Remoting started with transport [Artery tcp]; listening on address [akka://test@127.0.0.1:2550] with UID [-5416289659882727560]
03:30:48,520 AM [test-akka.actor.default-dispatcher-5] [INFO] Cluster - Cluster Node [akka://test@127.0.0.1:2550] - Starting up, Akka version [2.6.21] ...
03:30:48,525 AM [test-akka.actor.default-dispatcher-10] [INFO] Cluster - Cluster Node [akka://test@127.0.0.1:2550] - Registered cluster JMX MBean [akka:type=Cluster]
03:30:48,526 AM [test-akka.actor.default-dispatcher-10] [INFO] Cluster - Cluster Node [akka://test@127.0.0.1:2550] - Started up successfully
03:30:48,530 AM [test-akka.actor.default-dispatcher-5] [INFO] Cluster - Cluster Node [akka://test@127.0.0.1:2550] - No downing-provider-class configured, manual cluster downing required, see https://doc.akka.io/docs/akka/current/typed/cluster.html#downing
03:30:48,530 AM [test-akka.actor.default-dispatcher-5] [INFO] Cluster - Cluster Node [akka://test@127.0.0.1:2550] - No seed-nodes configured, manual cluster join required, see https://doc.akka.io/docs/akka/current/typed/cluster.html#joining
03:30:48,568 AM [test-akka.actor.default-dispatcher-5] [INFO] Slf4jLogger - Slf4jLogger started
03:30:48,614 AM [test-akka.actor.default-dispatcher-5] [INFO] ArteryTransport - Remoting started with transport [Artery tcp]; listening on address [akka://test@127.0.0.1:2552] with UID [5126558863986736120]
03:30:48,614 AM [test-akka.actor.default-dispatcher-5] [INFO] Cluster - Cluster Node [akka://test@127.0.0.1:2552] - Starting up, Akka version [2.6.21] ...
03:30:48,619 AM [test-akka.actor.default-dispatcher-5] [WARN] Cluster - Could not register Cluster JMX MBean with name=akka:type=Cluster as it is already registered.
If you are running multiple clusters in the same JVM, set 'akka.cluster.jmx.multi-mbeans-in-same-jvm = on' in config
03:30:48,620 AM [test-akka.actor.default-dispatcher-5] [INFO] Cluster - Cluster Node [akka://test@127.0.0.1:2552] - Started up successfully
03:30:48,633 AM [test-akka.actor.default-dispatcher-5] [INFO] Cluster - Cluster Node [akka://test@127.0.0.1:2552] - No downing-provider-class configured, manual cluster downing required, see https://doc.akka.io/docs/akka/current/typed/cluster.html#downing
03:30:48,633 AM [test-akka.actor.default-dispatcher-5] [INFO] Cluster - Cluster Node [akka://test@127.0.0.1:2552] - No seed-nodes configured, manual cluster join required, see https://doc.akka.io/docs/akka/current/typed/cluster.html#joining
03:30:48,652 AM [test-akka.actor.default-dispatcher-5] [DEBUG] NetconfNodeManagerTest$TestMasterActor - Actor created TestActor[akka://test/user/akka:test@127.0.0.1:2552_device]
03:30:48,653 AM [test-akka.actor.default-dispatcher-10] [INFO] EmptyLocalActorRef - Message [org.opendaylight.controller.cluster.common.actor.Monitor] from TestActor[akka://test/user/akka:test@127.0.0.1:2552_device] to Actor[akka://test/user/termination-monitor] was not delivered. [1] dead letters encountered. If this is not an expected behavior then Actor[akka://test/user/termination-monitor] may have terminated unexpectedly. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
03:30:48,665 AM [test-akka.actor.default-dispatcher-11] [DEBUG] NetconfNodeManagerTest$TestMasterActor - RemoteDevice{device}: received message org.opendaylight.netconf.topology.singleton.messages.CreateInitialMasterActorData@3dfde67f
03:30:48,666 AM [test-akka.actor.default-dispatcher-11] [DEBUG] NetconfNodeManagerTest$TestMasterActor - RemoteDevice{device}: Master is ready.
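
Note: the repeated "No seed-nodes configured, manual cluster join required" INFO entries above mean the nodes are expected to join programmatically rather than via akka.cluster.seed-nodes. A minimal Java sketch of such a manual join (the class and method names are invented; joining the node's own address forms a single-node cluster, which is a common test-harness shortcut):

    import akka.actor.ActorSystem;
    import akka.cluster.Cluster;

    final class ManualJoinSketch {
        static void joinSelf(ActorSystem system) {
            Cluster cluster = Cluster.get(system);
            // Self-join instead of configured seed nodes, as hinted by the Cluster INFO lines.
            cluster.join(cluster.selfAddress());
        }
    }
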
03:30:48,726 AM [main] [DEBUG] NetconfNodeManager - RemoteDevice{device}: Registering data tree change listener on path KeyedInstanceIdentifier{targetType=interface org.opendaylight.yang.gen.v1.urn.tbd.params.xml.ns.yang.network.topology.rev131021.network.topology.topology.Node, path=[org.opendaylight.yang.gen.v1.urn.tbd.params.xml.ns.yang.network.topology.rev131021.NetworkTopology, org.opendaylight.yang.gen.v1.urn.tbd.params.xml.ns.yang.network.topology.rev131021.network.topology.Topology[key=TopologyKey{topologyId=Uri{value=topology-netconf}}], org.opendaylight.yang.gen.v1.urn.tbd.params.xml.ns.yang.network.topology.rev131021.network.topology.topology.Node[key=NodeKey{nodeId=Uri{value=device}}]]}
03:30:48,728 AM [main] [DEBUG] NetconfNodeManager - RemoteDevice{device}: Operational state for node Uri{value=device} created: Node{nodeId=Uri{value=device}, augmentation=[NetconfNode{clusteredConnectionStatus=ClusteredConnectionStatus{netconfMasterNode=akka://test@127.0.0.1:2552}, connectionStatus=Connected, host=Host{ipAddress=IpAddress{ipv4Address=Ipv4Address{value=127.0.0.1}}}, port=PortNumber{value=9999}}]}
03:30:48,728 AM [main] [DEBUG] NetconfNodeManager - RemoteDevice{device}: Slave actor created with name Actor[akka://test/user/$a#-736560479]
03:30:48,728 AM [main] [DEBUG] NetconfNodeManager - RemoteDevice{device}: Sending AskForMasterMountPoint [slaveActorRef=Actor[akka://test/user/$a#-736560479]] message to master ActorSelection[Anchor(akka://test@127.0.0.1:2552/), Path(/user/akka:test@127.0.0.1:2552_device)]
03:30:48,729 AM [test-akka.actor.default-dispatcher-5] [DEBUG] NetconfNodeActor - Actor created Actor[akka://test/user/$a#-736560479]
03:30:48,730 AM [test-akka.actor.default-dispatcher-5] [INFO] EmptyLocalActorRef - Message [org.opendaylight.controller.cluster.common.actor.Monitor] from Actor[akka://test/user/$a#-736560479] to Actor[akka://test/user/termination-monitor] was not delivered. [1] dead letters encountered. If this is not an expected behavior then Actor[akka://test/user/termination-monitor] may have terminated unexpectedly. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
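
Note: the "Registering data tree change listener" entry above logs a KeyedInstanceIdentifier for the topology-netconf topology and node "device". A sketch of building that path with the classic MD-SAL binding API (the helper class/method are invented, and the exact constructors of the generated TopologyId/NodeId/key classes are assumed from common OpenDaylight usage):

    import org.opendaylight.yang.gen.v1.urn.tbd.params.xml.ns.yang.network.topology.rev131021.NetworkTopology;
    import org.opendaylight.yang.gen.v1.urn.tbd.params.xml.ns.yang.network.topology.rev131021.NodeId;
    import org.opendaylight.yang.gen.v1.urn.tbd.params.xml.ns.yang.network.topology.rev131021.TopologyId;
    import org.opendaylight.yang.gen.v1.urn.tbd.params.xml.ns.yang.network.topology.rev131021.network.topology.Topology;
    import org.opendaylight.yang.gen.v1.urn.tbd.params.xml.ns.yang.network.topology.rev131021.network.topology.TopologyKey;
    import org.opendaylight.yang.gen.v1.urn.tbd.params.xml.ns.yang.network.topology.rev131021.network.topology.topology.Node;
    import org.opendaylight.yang.gen.v1.urn.tbd.params.xml.ns.yang.network.topology.rev131021.network.topology.topology.NodeKey;
    import org.opendaylight.yangtools.yang.binding.InstanceIdentifier;

    final class TopologyNodePathSketch {
        // Path shape mirrors the logged KeyedInstanceIdentifier: NetworkTopology / Topology(topology-netconf) / Node(nodeId).
        static InstanceIdentifier<Node> nodePath(String nodeId) {
            return InstanceIdentifier.builder(NetworkTopology.class)
                .child(Topology.class, new TopologyKey(new TopologyId("topology-netconf")))
                .child(Node.class, new NodeKey(new NodeId(nodeId)))
                .build();
        }
    }
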
03:30:48,835 AM [test-akka.actor.default-dispatcher-5] [DEBUG] NetconfNodeManagerTest$TestMasterActor - RemoteDevice{device}: received message AskForMasterMountPoint [slaveActorRef=Actor[akka://test@127.0.0.1:2550/user/$a#-736560479]]
03:30:48,835 AM [test-akka.actor.default-dispatcher-5] [DEBUG] NetconfNodeManagerTest$TestMasterActor - RemoteDevice{device}: Sending RegisterMountPoint reply to Actor[akka://test@127.0.0.1:2550/user/$a#-736560479]
03:30:48,841 AM [test-akka.actor.default-dispatcher-10] [DEBUG] NetconfNodeActor - RemoteDevice{device}: received message RegisterMountPoint [allSourceIdentifiers=[SourceIdentifier [testID]], masterActorRef=Actor[akka://test@127.0.0.1:2552/user/akka:test@127.0.0.1:2552_device]]
03:30:48,843 AM [test-akka.actor.default-dispatcher-5] [DEBUG] NetconfNodeManager - RemoteDevice{device}: AskForMasterMountPoint [slaveActorRef=Actor[akka://test/user/$a#-736560479]] message to ActorSelection[Anchor(akka://test@127.0.0.1:2552/), Path(/user/akka:test@127.0.0.1:2552_device)] succeeded
03:30:48,845 AM [test-akka.actor.default-dispatcher-5] [DEBUG] NetconfNodeManagerTest$TestMasterActor - RemoteDevice{device}: received message YangTextSchemaSourceRequest [sourceIdentifier=SourceIdentifier [testID]]
03:30:48,845 AM [test-akka.actor.default-dispatcher-5] [DEBUG] NetconfNodeManagerTest$TestMasterActor - RemoteDevice{device}: getSchemaSource for SourceIdentifier [testID] succeeded
03:30:48,859 AM [test-akka.actor.default-dispatcher-5] [INFO] NetconfNodeActor - RemoteDevice{device}: Schema context resolved: [ModuleEffectiveStatementImpl{argument=Unqualified{localName=testID}}] - registering slave mount point
03:30:48,860 AM [test-akka.actor.default-dispatcher-5] [INFO] SlaveSalFacade - RemoteDevice{device}: Slave mount point registered.
03:30:48,861 AM [main] [DEBUG] NetconfNodeManager - RemoteDevice{device}: Operational state for node Uri{value=device} deleted.
03:30:48,861 AM [main] [DEBUG] NetconfNodeManager - RemoteDevice{device}: Sending message to unregister slave mountpoint to Actor[akka://test/user/$a#-736560479]
03:30:48,861 AM [test-akka.actor.default-dispatcher-5] [DEBUG] NetconfNodeActor - RemoteDevice{device}: received message org.opendaylight.netconf.topology.singleton.messages.UnregisterSlaveMountPoint@480622ab
03:30:48,861 AM [test-akka.actor.default-dispatcher-5] [INFO] SlaveSalFacade - RemoteDevice{device}: Slave mount point unregistered.
03:30:48,873 AM [main] [DEBUG] NetconfNodeManager - RemoteDevice{device}: Operational state for node Uri{value=device} created: Node{nodeId=Uri{value=device}, augmentation=[NetconfNode{clusteredConnectionStatus=ClusteredConnectionStatus{netconfMasterNode=akka://test@127.0.0.1:2552}, connectionStatus=Connected, host=Host{ipAddress=IpAddress{ipv4Address=Ipv4Address{value=127.0.0.1}}}, port=PortNumber{value=9999}}]}
03:30:48,873 AM [main] [DEBUG] NetconfNodeManager - RemoteDevice{device}: Sending AskForMasterMountPoint [slaveActorRef=Actor[akka://test/user/$a#-736560479]] message to master ActorSelection[Anchor(akka://test@127.0.0.1:2552/), Path(/user/akka:test@127.0.0.1:2552_device)]
03:30:48,876 AM [test-akka.actor.default-dispatcher-10] [DEBUG] NetconfNodeActor - RemoteDevice{device}: received message org.opendaylight.netconf.topology.singleton.messages.RefreshSlaveActor@27e2e267
03:30:48,878 AM [test-akka.actor.default-dispatcher-5] [DEBUG] NetconfNodeManagerTest$TestMasterActor - RemoteDevice{device}: received message AskForMasterMountPoint [slaveActorRef=Actor[akka://test@127.0.0.1:2550/user/$a#-736560479]]
03:30:48,878 AM [test-akka.actor.default-dispatcher-5] [DEBUG] NetconfNodeManagerTest$TestMasterActor - RemoteDevice{device}: Sending RegisterMountPoint reply to Actor[akka://test@127.0.0.1:2550/user/$a#-736560479]
03:30:48,883 AM [test-akka.actor.default-dispatcher-10] [DEBUG] NetconfNodeActor - RemoteDevice{device}: received message RegisterMountPoint [allSourceIdentifiers=[SourceIdentifier [testID]], masterActorRef=Actor[akka://test@127.0.0.1:2552/user/akka:test@127.0.0.1:2552_device]]
03:30:48,883 AM [test-akka.actor.default-dispatcher-10] [INFO] NetconfNodeActor - RemoteDevice{device}: Schema context resolved: [ModuleEffectiveStatementImpl{argument=Unqualified{localName=testID}}] - registering slave mount point
03:30:48,884 AM [test-akka.actor.default-dispatcher-10] [INFO] SlaveSalFacade - RemoteDevice{device}: Slave mount point registered.
03:30:48,885 AM [test-akka.actor.default-dispatcher-10] [DEBUG] NetconfNodeManager - RemoteDevice{device}: AskForMasterMountPoint [slaveActorRef=Actor[akka://test/user/$a#-736560479]] message to ActorSelection[Anchor(akka://test@127.0.0.1:2552/), Path(/user/akka:test@127.0.0.1:2552_device)] succeeded
03:30:48,886 AM [main] [DEBUG] NetconfNodeManager - RemoteDevice{device}: Operational state for node Uri{value=device} updated from Node{nodeId=Uri{value=device}, augmentation=[NetconfNode{clusteredConnectionStatus=ClusteredConnectionStatus{netconfMasterNode=akka://test@127.0.0.1:2552}, connectionStatus=Connected, host=Host{ipAddress=IpAddress{ipv4Address=Ipv4Address{value=127.0.0.1}}}, port=PortNumber{value=9999}}]} to Node{nodeId=Uri{value=device}, augmentation=[NetconfNode{clusteredConnectionStatus=ClusteredConnectionStatus{netconfMasterNode=akka://test@127.0.0.1:2552}, connectionStatus=Connected, host=Host{ipAddress=IpAddress{ipv4Address=Ipv4Address{value=127.0.0.1}}}, port=PortNumber{value=9999}}]}
03:30:48,887 AM [main] [DEBUG] NetconfNodeManager - RemoteDevice{device}: Sending AskForMasterMountPoint [slaveActorRef=Actor[akka://test/user/$a#-736560479]] message to master ActorSelection[Anchor(akka://test@127.0.0.1:2552/), Path(/user/akka:test@127.0.0.1:2552_device)]
03:30:48,887 AM [test-akka.actor.default-dispatcher-10] [DEBUG] NetconfNodeActor - RemoteDevice{device}: received message org.opendaylight.netconf.topology.singleton.messages.RefreshSlaveActor@62e068ff
03:30:48,891 AM [test-akka.actor.default-dispatcher-5] [DEBUG] NetconfNodeManagerTest$TestMasterActor - RemoteDevice{device}: received message AskForMasterMountPoint [slaveActorRef=Actor[akka://test@127.0.0.1:2550/user/$a#-736560479]]
03:30:48,891 AM [test-akka.actor.default-dispatcher-5] [DEBUG] NetconfNodeManagerTest$TestMasterActor - RemoteDevice{device}: Sending RegisterMountPoint reply to Actor[akka://test@127.0.0.1:2550/user/$a#-736560479]
03:30:48,898 AM [test-akka.actor.default-dispatcher-5] [DEBUG] NetconfNodeActor - RemoteDevice{device}: received message RegisterMountPoint [allSourceIdentifiers=[SourceIdentifier [testID]], masterActorRef=Actor[akka://test@127.0.0.1:2552/user/akka:test@127.0.0.1:2552_device]]
03:30:48,899 AM [test-akka.actor.default-dispatcher-5] [INFO] SlaveSalFacade - RemoteDevice{device}: Slave mount point unregistered.
03:30:48,899 AM [test-akka.actor.default-dispatcher-5] [INFO] NetconfNodeActor - RemoteDevice{device}: Schema context resolved: [ModuleEffectiveStatementImpl{argument=Unqualified{localName=testID}}] - registering slave mount point
03:30:48,900 AM [test-akka.actor.default-dispatcher-5] [INFO] SlaveSalFacade - RemoteDevice{device}: Slave mount point registered.
03:30:48,900 AM [test-akka.actor.default-dispatcher-10] [DEBUG] NetconfNodeManager - RemoteDevice{device}: AskForMasterMountPoint [slaveActorRef=Actor[akka://test/user/$a#-736560479]] message to ActorSelection[Anchor(akka://test@127.0.0.1:2552/), Path(/user/akka:test@127.0.0.1:2552_device)] succeeded
03:30:48,909 AM [main] [DEBUG] NetconfNodeManager - RemoteDevice{device}: Operational state for node Uri{value=device} - subtree modified from Node{nodeId=Uri{value=device}, augmentation=[NetconfNode{clusteredConnectionStatus=ClusteredConnectionStatus{netconfMasterNode=akka://test@127.0.0.1:2552}, connectionStatus=Connected, host=Host{ipAddress=IpAddress{ipv4Address=Ipv4Address{value=127.0.0.1}}}, port=PortNumber{value=9999}}]} to Node{nodeId=Uri{value=device}, augmentation=[NetconfNode{clusteredConnectionStatus=ClusteredConnectionStatus{netconfMasterNode=akka://test@127.0.0.1:2552}, connectionStatus=UnableToConnect, host=Host{ipAddress=IpAddress{ipv4Address=Ipv4Address{value=127.0.0.1}}}, port=PortNumber{value=9999}}]}
03:30:48,909 AM [main] [DEBUG] NetconfNodeManager - RemoteDevice{device}: Sending message to unregister slave mountpoint to Actor[akka://test/user/$a#-736560479]
03:30:48,910 AM [test-akka.actor.default-dispatcher-10] [DEBUG] NetconfNodeActor - RemoteDevice{device}: received message org.opendaylight.netconf.topology.singleton.messages.UnregisterSlaveMountPoint@1bcb0785
03:30:48,910 AM [test-akka.actor.default-dispatcher-10] [INFO] SlaveSalFacade - RemoteDevice{device}: Slave mount point unregistered.
03:30:48,920 AM [main] [DEBUG] NetconfNodeManager - RemoteDevice{device}: Sending poison pill to Actor[akka://test/user/$a#-736560479]
03:30:48,921 AM [test-akka.actor.default-dispatcher-10] [INFO] CoordinatedShutdown - Running CoordinatedShutdown with reason [ActorSystemTerminateReason]
03:30:48,924 AM [test-akka.actor.default-dispatcher-10] [INFO] LocalActorRef - Message [akka.cluster.ClusterUserAction$Leave] to Actor[akka://test/system/cluster/core/daemon#697850765] was unhandled. [2] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
03:30:48,926 AM [test-akka.actor.default-dispatcher-10] [INFO] Cluster - Cluster Node [akka://test@127.0.0.1:2550] - Exiting completed
03:30:48,927 AM [test-akka.actor.default-dispatcher-10] [INFO] Cluster - Cluster Node [akka://test@127.0.0.1:2550] - Shutting down...
03:30:48,927 AM [test-akka.actor.default-dispatcher-10] [INFO] Cluster - Cluster Node [akka://test@127.0.0.1:2550] - Successfully shut down
03:30:48,932 AM [test-akka.actor.default-dispatcher-10] [INFO] RemoteActorRefProvider$RemotingTerminator - Shutting down remote daemon.
03:30:48,932 AM [test-akka.actor.default-dispatcher-10] [INFO] RemoteActorRefProvider$RemotingTerminator - Remote daemon shut down; proceeding with flushing remote transports.
03:30:48,948 AM [test-akka.actor.default-dispatcher-5] [WARN] Materializer - [outbound connection to [akka://test@127.0.0.1:2552], control stream] Upstream failed, cause: StreamTcpException: The connection has been aborted
03:30:48,948 AM [test-akka.actor.default-dispatcher-5] [WARN] Materializer - [outbound connection to [akka://test@127.0.0.1:2552], message stream] Upstream failed, cause: StreamTcpException: The connection has been aborted
03:30:48,953 AM [test-akka.actor.default-dispatcher-5] [INFO] RemoteActorRefProvider$RemotingTerminator - Remoting shut down.
03:30:48,968 AM [test-akka.actor.default-dispatcher-5] [INFO] CoordinatedShutdown - Running CoordinatedShutdown with reason [ActorSystemTerminateReason]
03:30:48,971 AM [test-akka.actor.default-dispatcher-5] [INFO] LocalActorRef - Message [akka.cluster.ClusterUserAction$Leave] to Actor[akka://test/system/cluster/core/daemon#-1285445902] was unhandled. [2] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
03:30:48,972 AM [test-akka.actor.default-dispatcher-11] [INFO] Cluster - Cluster Node [akka://test@127.0.0.1:2552] - Exiting completed
03:30:48,972 AM [test-akka.actor.default-dispatcher-11] [INFO] Cluster - Cluster Node [akka://test@127.0.0.1:2552] - Shutting down...
03:30:48,972 AM [test-akka.actor.default-dispatcher-11] [WARN] Cluster - Could not unregister Cluster JMX MBean with name=akka:type=Cluster as it was not found.
If you are running multiple clusters in the same JVM, set 'akka.cluster.jmx.multi-mbeans-in-same-jvm = on' in config
03:30:48,972 AM [test-akka.actor.default-dispatcher-11] [INFO] Cluster - Cluster Node [akka://test@127.0.0.1:2552] - Successfully shut down
03:30:48,976 AM [test-akka.actor.default-dispatcher-5] [INFO] RemoteActorRefProvider$RemotingTerminator - Shutting down remote daemon.
03:30:48,976 AM [test-akka.actor.default-dispatcher-5] [INFO] RemoteActorRefProvider$RemotingTerminator - Remote daemon shut down; proceeding with flushing remote transports.
03:30:49,994 AM [test-akka.actor.default-dispatcher-5] [INFO] RemoteActorRefProvider$RemotingTerminator - Remoting shut down.
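
Note: the CoordinatedShutdown entries above run with reason [ActorSystemTerminateReason], i.e. the test tears down each node by terminating its ActorSystem, which in turn drives the Cluster exit and Artery remoting shutdown seen in the surrounding lines. A minimal Java sketch of that teardown (class name and the 10-second wait are assumptions; Akka's test support also offers akka.testkit.javadsl.TestKit.shutdownActorSystem(system) for the same purpose):

    import java.util.concurrent.TimeUnit;

    import akka.actor.ActorSystem;

    final class TeardownSketch {
        static void shutdown(ActorSystem system) throws Exception {
            system.terminate();   // triggers CoordinatedShutdown with ActorSystemTerminateReason
            system.getWhenTerminated().toCompletableFuture().get(10, TimeUnit.SECONDS);
        }
    }
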