I'm running a Spring Boot (1.3.5) console application with an embedded ActiveMQ server (5.10.0), which works just fine for receiving messages. However, I'm having trouble shutting down the application without exceptions.
This exception is thrown once for each queue, after hitting Ctrl-C:
2016-09-21 15:46:36.561 ERROR 18275 --- [update]] o.apache.activemq.broker.BrokerService : Failed to start Apache ActiveMQ ([my-mq-server, null], {})
java.lang.IllegalStateException: Shutdown in progress
at java.lang.ApplicationShutdownHooks.add(ApplicationShutdownHooks.java:66)
at java.lang.Runtime.addShutdownHook(Runtime.java:211)
at org.apache.activemq.broker.BrokerService.addShutdownHook(BrokerService.java:2446)
at org.apache.activemq.broker.BrokerService.doStartBroker(BrokerService.java:693)
at org.apache.activemq.broker.BrokerService.startBroker(BrokerService.java:684)
at org.apache.activemq.broker.BrokerService.start(BrokerService.java:605)
at org.apache.activemq.transport.vm.VMTransportFactory.doCompositeConnect(VMTransportFactory.java:127)
at org.apache.activemq.transport.vm.VMTransportFactory.doConnect(VMTransportFactory.java:56)
at org.apache.activemq.transport.TransportFactory.connect(TransportFactory.java:65)
at org.apache.activemq.ActiveMQConnectionFactory.createTransport(ActiveMQConnectionFactory.java:314)
at org.apache.activemq.ActiveMQConnectionFactory.createActiveMQConnection(ActiveMQConnectionFactory.java:329)
at org.apache.activemq.ActiveMQConnectionFactory.createActiveMQConnection(ActiveMQConnectionFactory.java:302)
at org.apache.activemq.ActiveMQConnectionFactory.createConnection(ActiveMQConnectionFactory.java:242)
at org.apache.activemq.jms.pool.PooledConnectionFactory.createConnection(PooledConnectionFactory.java:283)
at org.apache.activemq.jms.pool.PooledConnectionFactory$1.makeObject(PooledConnectionFactory.java:96)
at org.apache.activemq.jms.pool.PooledConnectionFactory$1.makeObject(PooledConnectionFactory.java:93)
at org.apache.commons.pool2.impl.GenericKeyedObjectPool.create(GenericKeyedObjectPool.java:1041)
at org.apache.commons.pool2.impl.GenericKeyedObjectPool.borrowObject(GenericKeyedObjectPool.java:357)
at org.apache.commons.pool2.impl.GenericKeyedObjectPool.borrowObject(GenericKeyedObjectPool.java:279)
at org.apache.activemq.jms.pool.PooledConnectionFactory.createConnection(PooledConnectionFactory.java:243)
at org.apache.activemq.jms.pool.PooledConnectionFactory.createConnection(PooledConnectionFactory.java:212)
at org.springframework.jms.support.JmsAccessor.createConnection(JmsAccessor.java:180)
at org.springframework.jms.listener.AbstractJmsListeningContainer.createSharedConnection(AbstractJmsListeningContainer.java:413)
at org.springframework.jms.listener.AbstractJmsListeningContainer.refreshSharedConnection(AbstractJmsListeningContainer.java:398)
at org.springframework.jms.listener.DefaultMessageListenerContainer.refreshConnectionUntilSuccessful(DefaultMessageListenerContainer.java:925)
at org.springframework.jms.listener.DefaultMessageListenerContainer.recoverAfterListenerSetupFailure(DefaultMessageListenerContainer.java:899)
at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.run(DefaultMessageListenerContainer.java:1075)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
2016-09-21 15:46:36.564 INFO 18275 --- [update]] o.apache.activemq.broker.BrokerService : Apache ActiveMQ 5.12.3 (my-mq-server, null) is shutting down
It seems as if the DefaultMessageListenerContainer tries to start an ActiveMQ server, which doesn't make sense to me. I've set the phase of the BrokerService to Integer.MAX_VALUE - 1 and the phase of the DefaultJmsListenerContainerFactory to Integer.MAX_VALUE to make it go away before the ActiveMQ server is stopped.
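For reference, here is a minimal sketch of that phase arrangement, assuming the broker is wrapped in a SmartLifecycle bean (the wrapper class and wiring are my own illustration, not the original configuration):

import org.apache.activemq.broker.BrokerService;
import org.springframework.context.SmartLifecycle;

// Higher phases start later and stop earlier, so listener containers at
// Integer.MAX_VALUE stop before this broker wrapper at Integer.MAX_VALUE - 1.
public class PhasedBroker implements SmartLifecycle {

    private final BrokerService broker;
    private volatile boolean running;

    public PhasedBroker(BrokerService broker) {
        this.broker = broker;
    }

    @Override
    public void start() {
        try {
            broker.start();
            running = true;
        } catch (Exception e) {
            throw new IllegalStateException("Broker failed to start", e);
        }
    }

    @Override
    public void stop() {
        try {
            broker.stop();
        } catch (Exception e) {
            throw new IllegalStateException("Broker failed to stop", e);
        } finally {
            running = false;
        }
    }

    @Override
    public void stop(Runnable callback) {
        stop();
        callback.run();
    }

    @Override
    public boolean isRunning() {
        return running;
    }

    @Override
    public boolean isAutoStartup() {
        return true;
    }

    @Override
    public int getPhase() {
        return Integer.MAX_VALUE - 1;
    }
}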
I have this in my main():
public static void main(String[] args) {
    final ConfigurableApplicationContext context = SpringApplication.run(SiteServer.class, args);
    context.registerShutdownHook();
}
I've tried setting daemon to true as suggested here: Properly Shutting Down ActiveMQ and Spring DefaultMessageListenerContainer.
Any ideas? Thanks! =)
Found it. This problem occurs when the Camel context is shut down after the BrokerService. Adding proper lifecycle management, so that Camel is shut down before the broker, resolved the issue. Now everything shuts down cleanly without errors. A sketch of that ordering is below.
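For anyone hitting the same thing, a minimal sketch of that ordering with plain Spring bean dependencies (the bean names and the non-persistent broker are assumptions for illustration): since dependent beans are destroyed before the beans they depend on, making the Camel context depend on the broker stops Camel first on shutdown.

import org.apache.activemq.broker.BrokerService;
import org.apache.camel.CamelContext;
import org.apache.camel.impl.DefaultCamelContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.DependsOn;

@Configuration
public class ShutdownOrderConfig {

    // Embedded broker, started on context startup and stopped on shutdown.
    @Bean(initMethod = "start", destroyMethod = "stop")
    public BrokerService brokerService() throws Exception {
        BrokerService broker = new BrokerService();
        broker.setBrokerName("my-mq-server");
        broker.setPersistent(false); // assumption for the sketch
        return broker;
    }

    // Spring creates the broker before Camel and, because dependents are
    // destroyed first, stops Camel before the broker on shutdown.
    @Bean(initMethod = "start", destroyMethod = "stop")
    @DependsOn("brokerService")
    public CamelContext camelContext() {
        return new DefaultCamelContext();
    }
}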
I am running an event in an Akka actor system where multiple actors query MongoDB and retrieve data. Each actor queries for 1,000 documents (each document is about 9 KB).
When running an event that fires 14 actors to query MongoDB for 13,000 documents, I experienced the exception below. I'm not sure why; has anyone experienced this before?
2020-04-14 19:17:28,818 [erp-writer-actor-system-akka.actor.default-dispatcher-378] ERROR c.a.s.c.m.GlobalContextMongoClientService- 76cd7a80-83ef-4389-885a-be9caed77449 - Exception occured while reading data from cursor
java.lang.IllegalStateException: state should be: open
at com.mongodb.assertions.Assertions.isTrue(Assertions.java:70)
at com.mongodb.connection.DefaultServer.getConnection(DefaultServer.java:84)
at com.mongodb.binding.ClusterBinding$ClusterBindingConnectionSource.getConnection(ClusterBinding.java:86)
at com.mongodb.operation.QueryBatchCursor.getMore(QueryBatchCursor.java:203)
at com.mongodb.operation.QueryBatchCursor.hasNext(QueryBatchCursor.java:103)
at com.mongodb.MongoBatchCursorAdapter.hasNext(MongoBatchCursorAdapter.java:46)
at com.xyz.smartconnect.commons.mongoclient.GlobalContextMongoClientService.findWorkers(GlobalContextMongoClientService.java:145)
at com.xyz.smartconnect.actors.QueryWorkersActor.lambda$createReceive$0(QueryWorkersActor.java:40)
at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26)
at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21)
at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123)
at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21)
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170)
at akka.actor.Actor$class.aroundReceive(Actor.scala:513)
at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:132)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:519)
at akka.actor.ActorCell.invoke(ActorCell.scala:488)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:257)
at akka.dispatch.Mailbox.run(Mailbox.scala:224)
at akka.dispatch.Mailbox.exec(Mailbox.scala:234)
at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Suppressed: java.lang.IllegalStateException: state should be: open
at com.mongodb.assertions.Assertions.isTrue(Assertions.java:70)
at com.mongodb.connection.DefaultServer.getConnection(DefaultServer.java:84)
at com.mongodb.binding.ClusterBinding$ClusterBindingConnectionSource.getConnection(ClusterBinding.java:86)
at com.mongodb.operation.QueryBatchCursor.killCursor(QueryBatchCursor.java:261)
at com.mongodb.operation.QueryBatchCursor.close(QueryBatchCursor.java:147)
at com.mongodb.MongoBatchCursorAdapter.close(MongoBatchCursorAdapter.java:41)
at com.xyz.smartconnect.commons.mongoclient.GlobalContextMongoClientService.findWorkers(GlobalContextMongoClientService.java:149)
After running multiple tests and analyzing the logs carefully, I found the root cause. Below are the details.
While the application was using a cursor to query data from MongoDB, the connection was released/closed. 'state should be: open' is complaining about that released connection.
In my case, the application ran into an OutOfMemoryError, which caused beans to be disposed and connections to be released. Here is the timeline of log events for this issue.
Since this is a memory issue in my case, fixing the memory issue will also fix the exception below.
2020-04-19 12:57:32,981 [xyz-actor-system-akka.actor.default-dispatcher-72] ERROR a.a.ActorSystemImpl- - 413f9298-ca92-4744-913b-59934e4ce831 - exception on LARS’ timer thread
java.lang.OutOfMemoryError: GC overhead limit exceeded
at akka.actor.LightArrayRevolverScheduler$$anon$4.nextTick(LightArrayRevolverScheduler.scala:269)
at akka.actor.LightArrayRevolverScheduler$$anon$4.run(LightArrayRevolverScheduler.scala:235)
at java.lang.Thread.run(Thread.java:748)
2020-04-19 12:57:43,649 [Thread-19] INFO o.s.c.s.DefaultLifecycleProcessor- - - Stopping beans in phase 2147483647
2020-04-19 12:58:13,483 [Thread-19] INFO o.s.j.e.a.AnnotationMBeanExporter- - - Unregistering JMX-exposed beans on shutdown
2020-04-19 12:58:45,186 [localhost-startStop-2] INFO c.a.s.ApplicationContextListener- - - >>>>>>>>> Disposing beans
2020-04-19 12:59:00,182 [localhost-startStop-2] INFO c.a.s.c.SpringBeanDisposer- - - Mongo connections are released.
2020-04-19 12:59:09,591 [xyz-actor-system-akka.actor.default-dispatcher-73] ERROR c.a.s.c.m.GlobalContextMongoClientService- - 413f9298-ca92-4744-913b-59934e4ce831 - Exception occured while reading data from cursor
java.lang.IllegalStateException: state should be: open
at com.mongodb.assertions.Assertions.isTrue(Assertions.java:70)
at com.mongodb.connection.DefaultServer.getDescription(DefaultServer.java:114)
at com.mongodb.binding.ClusterBinding$ClusterBindingConnectionSource.getServerDescription(ClusterBinding.java:81)
at com.mongodb.operation.QueryBatchCursor.initFromCommandResult(QueryBatchCursor.java:251)
at com.mongodb.operation.QueryBatchCursor.getMore(QueryBatchCursor.java:207)
at com.mongodb.operation.QueryBatchCursor.hasNext(QueryBatchCursor.java:103)
at com.mongodb.MongoBatchCursorAdapter.hasNext(MongoBatchCursorAdapter.java:46)
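Given that root cause, a minimal defensive sketch (assuming the synchronous MongoDB Java driver; class and collection names are illustrative) that closes the cursor deterministically and fails fast when the connection disappears mid-read:

import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoCursor;
import org.bson.Document;

import java.util.ArrayList;
import java.util.List;

public class CursorReader {

    // Reads up to 'limit' documents, guaranteeing the cursor is closed
    // even if the client is shut down mid-iteration.
    public static List<Document> readBatch(MongoCollection<Document> workers, int limit) {
        List<Document> batch = new ArrayList<>();
        try (MongoCursor<Document> cursor = workers.find().limit(limit).iterator()) {
            while (cursor.hasNext()) {
                batch.add(cursor.next());
            }
        } catch (IllegalStateException e) {
            // "state should be: open" lands here when the underlying
            // connection was released under us; fail fast with context.
            throw new IllegalStateException("Mongo connection closed while reading batch", e);
        }
        return batch;
    }
}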
I'm unable to attach an @SqsListener in a Spring Boot application. It throws an AWS.SimpleQueueService.NonExistentQueue exception.
I've gone through the question: Specified queue does not exist
and as far as I know, all the configurations are correct.
@Component
public class SQSListenerImpl {

    @SqsListener(value = Constants.SQS_REQUEST_QUEUE, deletionPolicy = SqsMessageDeletionPolicy.NEVER)
    public void listen(String taskJson, Acknowledgment acknowledgment, @Headers Map<String, String> headers) throws ExecutionException, InterruptedException {
        //stuff
    }

    @PostConstruct
    private void init() {
        final AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();
        LOGGER.info("Listing all queues in your account.\n");
        for (final String queueUrl : sqs.listQueues().getQueueUrls()) {
            LOGGER.info(" QueueUrl: " + queueUrl);
        }
    }
}
application.properties
cloud.aws.stack.auto=false
cloud.aws.region.static=ap-southeast-1
logging.level.root=INFO
Logs from above code:
[requestId: MainThread] [INFO] [SQSListenerImpl] [main] QueueUrl: https://sqs.ap-southeast-1.amazonaws.com/xxxxx/hello-world
[requestId: MainThread] [INFO] [SQSListenerImpl] [main] QueueUrl: https://sqs.ap-southeast-1.amazonaws.com/xxxxx/some-name2
[requestId: MainThread] [INFO] [SQSListenerImpl] [main] QueueUrl: https://sqs.ap-southeast-1.amazonaws.com/xxxxx/some-name3
[requestId: MainThread] [WARN] [SimpleMessageListenerContainer] [main] Ignoring queue with name 'hello-world': The queue does not exist.; nested exception is com.amazonaws.services.sqs.model.QueueDoesNotExistException: The specified queue does not exist for this wsdl version. (Service: AmazonSQS; Status Code: 400; Error Code: AWS.SimpleQueueService.NonExistentQueue; Request ID: 3c0108aa-7611-528f-ac69-5eb01fasb9f3)
[requestId: MainThread] [INFO] [Http11NioProtocol] [main] Starting ProtocolHandler ["http-nio-8080"]
[requestId: MainThread] [INFO] [TomcatWebServer] [main] Tomcat started on port(s): 8080 (http) with context path ''
[requestId: MainThread] [INFO] [Startup] [main] Started Startup in 11.391 seconds (JVM running for 12.426)
The AWS credentials used are under the ~/.aws/ directory.
Now my question is: if sqs.listQueues() can see the queue, then why can't @SqsListener? Am I missing something or doing something wrong?
I tried Spring Cloud AWS like you and got the same error.
Then I used the full HTTPS URL as the queue name and got an access-denied error:
@SqsListener(value = "https://sqs.ap-southeast-1.amazonaws.com/xxxxx/hello-world")
So in the end, I ended up using the AWS SDK directly to get messages from SQS, as sketched below.
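A minimal sketch of that SDK-based polling (AWS SDK for Java v1; the queue URL is a placeholder):

import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
import com.amazonaws.services.sqs.model.Message;
import com.amazonaws.services.sqs.model.ReceiveMessageRequest;

public class SqsPoller {

    private static final String QUEUE_URL =
            "https://sqs.ap-southeast-1.amazonaws.com/xxxxx/hello-world"; // placeholder

    public static void main(String[] args) {
        AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();
        ReceiveMessageRequest request = new ReceiveMessageRequest(QUEUE_URL)
                .withMaxNumberOfMessages(10)
                .withWaitTimeSeconds(20); // long polling
        for (Message message : sqs.receiveMessage(request).getMessages()) {
            System.out.println("Body: " + message.getBody());
            // Delete only after successful processing.
            sqs.deleteMessage(QUEUE_URL, message.getReceiptHandle());
        }
    }
}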
Here's what I'm doing with Spring Cloud.
Using SpEL, I'm attaching a value from my application.properties to the @SqsListener annotation like this:
@SqsListener(value = "#{queueConfiguration.getQueue()}", deletionPolicy = SqsMessageDeletionPolicy.ON_SUCCESS)
One thing to note: make sure you use the full HTTPS path for the queue.
For all local development, I'm using LocalStack as a local implementation of SQS, but the same code applies when it gets deployed in ECS. The other piece to note is that the role or instance needs IAM permission to receive messages for this to work.
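For context, a minimal sketch of the queueConfiguration bean referenced in the SpEL expression above (the bean and property names are my own; adapt to your setup):

import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

@Component("queueConfiguration")
public class QueueConfiguration {

    // Full HTTPS queue URL, e.g. https://sqs.ap-southeast-1.amazonaws.com/xxxxx/hello-world
    @Value("${app.sqs.queue-url}")
    private String queue;

    public String getQueue() {
        return queue;
    }
}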
Using the full URL worked for me:
@SqsListener("https://sqs.ap-south-1.amazonaws.com/xxxxxxxxx/queue-name")
Using the code below worked for me:
@SqsListener(value = "https://sqs.us-west-1.amazonaws.com/xxxxx/queue-name")
When a certain event arrives, I stop my RabbitMQ listener using container.stop(), and after the needed job is done, I restart it with container.start(). But when a new event arrives, I get the following error:
Exception in thread "SimpleAsyncTaskExecutor-1" 2016-04-22T16:20:53.646 WARN 15336 --- [cTaskExecutor-1] o.s.a.r.l.SimpleMessageListenerContainer : Consumer raised exception, processing can restart if the connection factory supports it
java.lang.NullPointerException: null
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer.isActive(SimpleMessageListenerContainer.java:756) ~[spring-rabbit-1.4.3.RELEASE.jar:na]
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer.access$600(SimpleMessageListenerContainer.java:82) ~[spring-rabbit-1.4.3.RELEASE.jar:na]
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer$AsyncMessageProcessingConsumer.run(SimpleMessageListenerContainer.java:1100) ~[spring-rabbit-1.4.3.RELEASE.jar:na]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_77]
It's actually a harmless (but scary) log message, and it is fixed in 1.5.3.
As mentioned in this answer, it's generally better to stop the container on a separate thread, because the container can't fully stop until the listener exits.
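A minimal sketch of that advice, assuming the caller holds a reference to the SimpleMessageListenerContainer:

import org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer;

public class ListenerControl {

    // The container cannot fully stop until the listener thread exits, so
    // stop it from a separate thread rather than from inside the listener.
    public static void stopAsync(SimpleMessageListenerContainer container) {
        new Thread(container::stop, "listener-container-stopper").start();
    }

    // Restart once the offline work is done.
    public static void restart(SimpleMessageListenerContainer container) {
        container.start();
    }
}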
In my application, a Java Spark context is created with an unavailable master URL (you may assume the master is down for maintenance). Creating the Java Spark context leads to stopping the JVM that runs the Spark driver, with JVM exit code 50.
When I checked the logs, I found SparkUncaughtExceptionHandler calling System.exit. My program should run forever. How should I overcome this issue?
I tried this scenario with Spark versions 1.4.1 and 1.6.0.
My code is given below
package test.mains;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class CheckJavaSparkContext {

    /**
     * @param args the command line arguments
     */
    public static void main(String[] args) {
        SparkConf conf = new SparkConf();
        conf.setAppName("test");
        conf.setMaster("spark://sunshine:7077");
        try {
            new JavaSparkContext(conf);
        } catch (Throwable e) {
            System.out.println("Caught an exception : " + e.getMessage());
            //e.printStackTrace();
        }
        System.out.println("Waiting to complete...");
        while (true) {
        }
    }
}
Part of the output log
16/03/04 18:02:24 INFO SparkDeploySchedulerBackend: Shutting down all executors
16/03/04 18:02:24 INFO SparkDeploySchedulerBackend: Asking each executor to shut down
16/03/04 18:02:24 WARN AppClient$ClientEndpoint: Drop UnregisterApplication(null) because has not yet connected to master
16/03/04 18:02:24 ERROR SparkUncaughtExceptionHandler: Uncaught exception in thread Thread[appclient-registration-retry-thread,5,main]
java.lang.InterruptedException
at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1039)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
at scala.concurrent.impl.Promise$DefaultPromise.tryAwait(Promise.scala:208)
at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:218)
at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
at scala.concurrent.Await$.result(package.scala:107)
at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75)
at org.apache.spark.deploy.client.AppClient.stop(AppClient.scala:290)
at org.apache.spark.scheduler.cluster.SparkDeploySchedulerBackend.org$apache$spark$scheduler$cluster$SparkDeploySchedulerBackend$$stop(SparkDeploySchedulerBackend.scala:198)
at org.apache.spark.scheduler.cluster.SparkDeploySchedulerBackend.stop(SparkDeploySchedulerBackend.scala:101)
at org.apache.spark.scheduler.TaskSchedulerImpl.stop(TaskSchedulerImpl.scala:446)
at org.apache.spark.scheduler.DAGScheduler.stop(DAGScheduler.scala:1582)
at org.apache.spark.SparkContext$$anonfun$stop$7.apply$mcV$sp(SparkContext.scala:1731)
at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1229)
at org.apache.spark.SparkContext.stop(SparkContext.scala:1730)
at org.apache.spark.scheduler.cluster.SparkDeploySchedulerBackend.dead(SparkDeploySchedulerBackend.scala:127)
at org.apache.spark.deploy.client.AppClient$ClientEndpoint.markDead(AppClient.scala:264)
at org.apache.spark.deploy.client.AppClient$ClientEndpoint$$anon$2$$anonfun$run$1.apply$mcV$sp(AppClient.scala:134)
at org.apache.spark.util.Utils$.tryOrExit(Utils.scala:1163)
at org.apache.spark.deploy.client.AppClient$ClientEndpoint$$anon$2.run(AppClient.scala:129)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
16/03/04 18:02:24 INFO DiskBlockManager: Shutdown hook called
16/03/04 18:02:24 INFO ShutdownHookManager: Shutdown hook called
16/03/04 18:02:24 INFO ShutdownHookManager: Deleting directory /tmp/spark-ea68a0fa-4f0d-4dbb-8407-cce90ef78a52
16/03/04 18:02:24 INFO ShutdownHookManager: Deleting directory /tmp/spark-ea68a0fa-4f0d-4dbb-8407-cce90ef78a52/userFiles-db548748-a55c-4406-adcb-c09e63b118bd
Java Result: 50
If the application master is down, the application will by itself try to connect to the master three times with a 20-second timeout. It looks like these parameters are hardcoded and not configurable. If the application fails to connect, there is nothing more you can do than try to resubmit your application once the master is up again.
That is why you should configure your cluster in a high availability mode. Spark Standalone supports two different modes:
Single-Node Recovery with Local File System
Standby Masters with ZooKeeper
The second option is the one applicable in production and useful in the described scenario; a sketch follows.
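A minimal driver-side sketch under the ZooKeeper setup (master1/master2 are placeholder hosts; the masters themselves are started with spark.deploy.recoveryMode=ZOOKEEPER and spark.deploy.zookeeper.url in spark-env.sh):

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class HighAvailabilityContext {

    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setAppName("test")
                // List every master; the driver registers with the current
                // leader and fails over if the active master dies.
                .setMaster("spark://master1:7077,master2:7077");
        JavaSparkContext sc = new JavaSparkContext(conf);
        System.out.println("Connected to master " + sc.sc().master());
        sc.stop();
    }
}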
My application has been running for months and working very well. Then suddenly I got the following error:
com.hazelcast.core.HazelcastInstanceNotActiveException: Hazelcast instance is not active!
at com.hazelcast.spi.impl.ProxyServiceImpl$ProxyRegistry.<init>(ProxyServiceImpl.java:220)
at com.hazelcast.spi.impl.ProxyServiceImpl$ProxyRegistry.<init>(ProxyServiceImpl.java:207)
at com.hazelcast.spi.impl.ProxyServiceImpl$1.createNew(ProxyServiceImpl.java:69)
at com.hazelcast.spi.impl.ProxyServiceImpl$1.createNew(ProxyServiceImpl.java:67)
at com.hazelcast.util.ConcurrencyUtil.getOrPutIfAbsent(ConcurrencyUtil.java:47)
at com.hazelcast.spi.impl.ProxyServiceImpl.getDistributedObject(ProxyServiceImpl.java:101)
at com.hazelcast.instance.HazelcastInstanceImpl.getDistributedObject(HazelcastInstanceImpl.java:285)
at com.hazelcast.instance.HazelcastInstanceImpl.getLock(HazelcastInstanceImpl.java:183)
at com.hazelcast.instance.HazelcastInstanceProxy.getLock(HazelcastInstanceProxy.java:77)
at br.com.xyz.lock.hazelcast.HazelcastLockManager.lock(HazelcastLockManager.java:37)
at br.com.xyz.lock.hazelcast.LockManagerFacade.lock(LockManagerFacade.java:24)
at br.com.xyz.recebe.negocio.NProcessadorMensagemRecebida.processamentoLock(NProcessadorMensagemRecebida.java:85)
at br.com.xyz.recebe.negocio.NProcessadorMensagemRecebida.processaArquivo(NProcessadorMensagemRecebida.java:74)
at br.com.xyz.recebe.processador.ProcessadorBase.processaArquivo(ProcessadorBase.java:75)
at br.com.xyz.recebe.processador.ProcessadorXml.processaArquivo(ProcessadorXml.java:16)
at br.com.xyz.recebe.processador.ProcessadorFacade.processaArquivo(ProcessadorFacade.java:34)
at br.com.xyz.recebe.mail.pdes.ProcessadorPDESMeRecebida.processar(ProcessadorPDESMeRecebida.java:77)
at gov.sefaz.util.pdes.ProcessadorDiretorioEntradaSaidaDaemon.processar(ProcessadorDiretorioEntradaSaidaDaemon.java:575)
at gov.sefaz.util.pdes.ProcessadorDiretorioEntradaSaidaDaemon.varrerDiretorioUsingStrategy(ProcessadorDiretorioEntradaSaidaDaemon.java:526)
at gov.sefaz.util.pdes.ProcessadorDiretorioEntradaSaidaDaemon.run(ProcessadorDiretorioEntradaSaidaDaemon.java:458)
at java.lang.Thread.run(Unknown Source)
I found some issues on Google that say it shuts down because of other errors, but in my case there aren't any.
It shut down without reason.
Has anyone seen this before?
The problem occurs if a Hazelcast member doesn't shut down normally (it is terminated). When you stop a HazelcastInstance that a queue proxy is bound to, any operation on that queue after the instance has stopped will throw HazelcastInstanceNotActiveException.
You just need to wait a few minutes and rerun your job. A defensive sketch is below.
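For callers that can tolerate a rerun, a minimal defensive sketch (method and lock names are illustrative) that surfaces the shutdown instead of failing opaquely mid-lock:

import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.HazelcastInstanceNotActiveException;

import java.util.concurrent.locks.Lock;

public class GuardedLockRunner {

    // Runs 'job' under a distributed lock; if the member shuts down
    // mid-operation, rethrows with context so the caller can rerun later.
    public static void runLocked(HazelcastInstance hz, String lockName, Runnable job) {
        try {
            Lock lock = hz.getLock(lockName);
            lock.lock();
            try {
                job.run();
            } finally {
                lock.unlock();
            }
        } catch (HazelcastInstanceNotActiveException e) {
            throw new IllegalStateException(
                    "Hazelcast member is not active; rerun the job once the cluster is back", e);
        }
    }
}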