Catch-all exception handling for outbound ChannelHandler - java

In Netty you have the concept of inbound and outbound handlers. A catch-all inbound exception handler is implemented simply by adding a channel handler at the end (the tail) of the pipeline and overriding exceptionCaught. An exception raised along the inbound pipeline travels from handler to handler until it reaches the last one, if it is not handled along the way.
There isn't an exact opposite for outgoing handlers. Instead (according to Netty in Action, page 94) you need to either add a listener to the channel's Future or a listener to the Promise passed into the write method of your Handler.
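For reference, I assume "the former" would look roughly like the sketch below, attached wherever a write is actually issued (channel, logger and serverErrorJSON are placeholders from my code):
channel.writeAndFlush(response).addListener((ChannelFutureListener) future -> {
    if (!future.isSuccess()) {
        logger.error("Outbound write failed", future.cause());
        future.channel().writeAndFlush(serverErrorJSON("an error!"));
        future.channel().close();
    }
});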
As I am not sure where to insert the former, I thought I'd go for the latter, so I made the following ChannelOutboundHandler:
/**
 * Catch and log errors happening in the outgoing direction
 *
 * @see <p>p94 in "Netty In Action"</p>
 */
private ChannelOutboundHandlerAdapter createOutgoingErrorHandler() {
    return new ChannelOutboundHandlerAdapter() {
        @Override
        public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) {
            logger.info("howdy! (never gets this far)");
            final ChannelFutureListener channelFutureListener = future -> {
                if (!future.isSuccess()) {
                    future.cause().printStackTrace();
                    // ctx.writeAndFlush(serverErrorJSON("an error!"));
                    future.channel().writeAndFlush(serverErrorJSON("an error!"));
                    future.channel().close();
                }
            };
            promise.addListener(channelFutureListener);
            ctx.write(msg, promise);
        }
    };
}
This is added to the head of the pipeline:
@Override
public void addHandlersToPipeline(final ChannelPipeline pipeline) {
    pipeline.addLast(
            createOutgoingErrorHandler(),
            new HttpLoggerHandler(), // an error in this `write` should go "up"
            authHandlerFactory.get(),
            // etc
The problem is that the write method of my error handler is never called if I throw a runtime exception in HttpLoggerHandler.write().
How would I make this work? An error in any of the outgoing handlers should "bubble up" to the one attached to the head.
An important thing to note is that I don't merely want to close the channel, I want to write an error message back to the client (as seen with the serverErrorJSON(...) call). During my trials of shuffling the order of the handlers around (also trying out stuff from this answer), I have gotten the listener activated, but I was unable to write anything. If I used ctx.write() in the listener, it seemed as if I got into a loop, while using future.channel().write... didn't do anything.

I found a very simple solution that allows both inbound and outbound exceptions to reach the same exception handler positioned as the last ChannelHandler in the pipeline.
My pipeline is set up as follows:
// Inbound propagation
socketChannel.pipeline()
    .addLast(new Decoder())
    .addLast(new ExceptionHandler());

// Outbound propagation
socketChannel.pipeline()
    .addFirst(new OutboundExceptionRouter())
    .addFirst(new Encoder());
This is the content of my ExceptionHandler, it logs caught exceptions:
public class ExceptionHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
        log.error("Exception caught on channel", cause);
    }
}
Now the magic that allows even outbound exceptions to be handled by ExceptionHandler happens in the OutboundExceptionRouter:
public class OutboundExceptionRouter extends ChannelOutboundHandlerAdapter {
    @Override
    public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) throws Exception {
        promise.addListener(ChannelFutureListener.FIRE_EXCEPTION_ON_FAILURE);
        super.write(ctx, msg, promise);
    }
}
This is the first outbound handler invoked in my pipeline. What it does is add a listener to the outbound write promise that executes future.channel().pipeline().fireExceptionCaught(future.cause()); when the promise fails. The fireExceptionCaught method propagates the exception through the pipeline in the inbound direction, eventually reaching the ExceptionHandler.
In case anyone is interested, as of Netty 4.1 the reason why we need to add a listener to get the exception is that after a writeAndFlush on the channel, the invokeWrite0 method in AbstractChannelHandlerContext.java wraps the write operation in a try/catch block. The catch block notifies the Promise instead of calling fireExceptionCaught the way invokeChannelRead does for inbound messages.
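For reference, FIRE_EXCEPTION_ON_FAILURE is roughly equivalent to adding this listener by hand (a sketch of the idea, not Netty's exact source):
ChannelFutureListener fireOnFailure = future -> {
    if (!future.isSuccess()) {
        // re-inject the write failure as an inbound exceptionCaught event
        future.channel().pipeline().fireExceptionCaught(future.cause());
    }
};
promise.addListener(fireOnFailure);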

Basically what you did is correct... The only thing that is not correct is the order of the handlers. Your ChannelOutboundHandlerAdapter must be placed as the last outbound handler in the pipeline, which means it should look like this:
pipeline.addLast(
        new HttpLoggerHandler(),
        createOutgoingErrorHandler(),
        authHandlerFactory.get());
The reason for this is that outbound events flow from the tail to the head of the pipeline, while inbound events flow from the head to the tail.

There does not seem to be a generalized concept of a catch-all exception handler for outbound handlers that catches errors regardless of where they occur. This means that unless you registered a listener to catch a particular error, a runtime error will probably be "swallowed", leaving you scratching your head over why nothing is returned to the client.
That said, maybe it doesn't make sense to have a handler/listener that always executes on any error (it would have to be very general), but it does make logging errors a bit trickier than it needs to be.
After writing a bunch of learning tests (which I suggest checking out!) I ended up with these insights, which are basically the names of my JUnit tests (after some regex manipulation):
a listener can write to a channel after the parent write has completed
a write listener can remove listeners from the pipeline and write on an erroneous write
all listeners are invoked on success if the same promise is passed on
an error handler near the tail cannot catch an error from a handler nearer the head
netty does not invoke the next handlers write on runtime exception
netty invokes a write listener once on a normal write
netty invokes a write listener once on an erroneous write
netty invokes the next handlers write with its written message
promises can be used to listen for next handlers success or failure
promises can be used to listen for non immediate handlers outcome if the promise is passed on
promises cannot be used to listen for non immediate handlers outcome if a new promise is passed on
promises cannot be used to listen for non immediate handlers outcome if the promise is not passed on
only the listener added to the final write is invoked on error if the promise is not passed on
only the listener added to the final write is invoked on success if the promise is not passed on
write listeners are invoked from the tail
Given the example in the question, this insight means that if an error arises near the tail and authHandler does not pass the promise on, then the error handler near the head will never be invoked, because it is supplied with a new promise (ctx.write(msg) is essentially ctx.write(msg, ctx.newPromise())).
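To make that concrete, here is a hedged sketch of the difference inside a hypothetical business-logic handler (transform is a placeholder):
@Override
public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) {
    Object transformed = transform(msg); // hypothetical transformation

    // Variant A: ctx.write(transformed) forwards the write with a brand-new promise,
    // so listeners attached to the original promise never hear about a failure here.
    // Variant B: pass the original promise on, keeping its listeners in the loop:
    ctx.write(transformed, promise);
}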
In our case we ended up solving this by injecting the same shareable error handler in between all the business-logic handlers.
The handler looked like this:
@ChannelHandler.Sharable
class OutboundErrorHandler extends ChannelOutboundHandlerAdapter {

    private final static Logger logger = LoggerFactory.getLogger(OutboundErrorHandler.class);
    private Throwable handledCause = null;

    @Override
    public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) {
        ctx.write(msg, promise).addListener(writeResult -> handleWriteResult(ctx, writeResult));
    }

    private void handleWriteResult(ChannelHandlerContext ctx, Future<?> writeResult) {
        if (!writeResult.isSuccess()) {
            final Throwable cause = writeResult.cause();
            if (cause instanceof ClosedChannelException) {
                // no reason to close an already closed channel - just ignore
                return;
            }

            // Since this handler is shared and added multiple times
            // we need to avoid spamming the logs N number of times for the same error
            if (handledCause == cause) return;
            handledCause = cause;

            logger.error("Uncaught exception on write!", cause);

            // By checking on channel writability and closing the channel after writing the error message,
            // only the first listener will signal the error to the client
            final Channel channel = ctx.channel();
            if (channel.isWritable()) {
                ctx.writeAndFlush(serverErrorJSON(cause.getMessage()), channel.newPromise());
                ctx.close();
            }
        }
    }
}
Then in our pipeline setup we have this
// Prepend the error handler to every entry in the pipeline.
// The intention behind this is to have a catch-all
// outbound error handler and thereby avoid the need to attach a
// listener to every ctx.write(...).
final OutboundErrorHandler outboundErrorHandler = new OutboundErrorHandler();
for (Map.Entry<String, ChannelHandler> entry : pipeline) {
    pipeline.addBefore(entry.getKey(), entry.getKey() + "#OutboundErrorHandler", outboundErrorHandler);
}

Related

Retrieve amqp queue name within global error handler

I am implementing a global error handler in a complex system (many queues, many listeners). Inside the handling method, I need to retrieve the name of the queue the message was consumed from. Is that even possible?
My scenario (for full context, but feel free to ignore what follows and focus on the question only)
I want to use the global error handler to catch any non-fatal exception and enqueue the message into a "retry" exchange bound to a "retry" queue with an x-message-ttl of, say, a few seconds and an x-dead-letter-exchange set to the default exchange. I want to set the message's routing key to the queue the message came from, so the default exchange will resend it to its original queue. This way all consumers will retry consuming any failed message with a delay, preventing the infamous infinite-retry loop. Hardcoding each queue manually on each consumer is obviously not suitable, because there are so many consumers that the solution would be unmaintainable.
EDIT: if not within the error handler, is there any other amqp construct that I can use to intercept the listener and add the queue name to, for example, the message headers so that the error handler would have access to it?
I figured it out. I found out that the message carries information about the queue it comes from.
class MyGlobalErrorHandler implements ErrorHandler {
    public void handleError(Throwable t) {
        String queueName = ((ListenerExecutionFailedException) t)
                .getFailedMessage()
                .getMessageProperties()
                .getConsumerQueue();
        // ...
    }
}
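To make the wiring explicit, a handler like this can be registered on the listener container factory. The sketch below is an assumption on my part (Spring AMQP's SimpleRabbitListenerContainerFactory and hypothetical bean names), not part of the original solution:
@Bean
public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(ConnectionFactory connectionFactory) {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    factory.setErrorHandler(new MyGlobalErrorHandler()); // the global error handler from above
    return factory;
}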

Java JMS - Message Listener and onException

I have an application with a main thread and a JMS thread which talk to each other through ActiveMQ 5.15.11. I am able to send messages just fine, however I would like a way to send back status or errors. I noticed that the MessageListener allows for onSuccess() and onException(ex) as two events to listen for, however I am finding that only onSuccess() is getting called.
Here are snippets of my code.
JMS Thread:
ConnectionFactory factory = super.getConnectionFactory();
Connection connection = factory.createConnection();
Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
Queue queue = session.createQueue(super.getQueue());
MessageConsumer consumer = session.createConsumer(queue);
consumer.setMessageListener(m -> {
    try {
        super.processRmbnConfigMsg(m);
    } catch (JMSException | IOException e) {
        LOG.error(e.getMessage(), e);
        // I can only use RuntimeException.
        // Also this exception is what I am expecting to get passed to the onException(..)
        // call in the main thread.
        throw new RuntimeException(e);
    }
});
connection.start();
connection.start();
Main thread (sending messages to JMS):
sendMessage(xml, new AsyncCallback() {
    @Override
    public void onException(JMSException e) {
        // I am expecting this to be that RuntimeException from the JMS thread.
        LOG.error("Error", e);
        doSomethingWithException(e);
    }

    @Override
    public void onSuccess() {
        LOG.info("Success");
    }
});
What I am expecting is that the exception thrown as new RuntimeException(e) will get picked up by the onException(JMSException e) listener in some way, even if the RuntimeException is wrapped.
Instead, I always get onSuccess() events. I suppose the onException(..) event fires on communication issues, but I would like a way to send exceptions back to the caller.
How do I accomplish that goal of collecting errors in the JMS thread and sending it back to my calling thread?
Your expectation is based on a fundamental misunderstanding of JMS.
One of the basic tenets of brokered messaging is that producers and consumers are logically disconnected from each other. In other words, a producer sends a message to a broker and it doesn't necessarily care whether it is consumed successfully or not, and it certainly won't know who consumes it or have any guarantee of when it will be consumed. Likewise, a consumer doesn't necessarily know when or why the message was sent or who sent it. This provides great flexibility between producers and consumers. JMS adheres to this tenet of disconnected producers and consumers.
There is no direct way for a consumer to inform a producer about a problem with the consumption of the message it sent. That said, you can employ what's called a "request/response pattern" so that the consumer can provide some kind of feedback to the producer. You can find an explanation of this pattern along with example code here.
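As a rough illustration of that pattern (a sketch only, not the linked example): the producer sets a JMSReplyTo destination and listens on it, and the consumer reports its outcome by sending a status message back to that destination instead of throwing a RuntimeException. The process and reply names below are placeholders:
consumer.setMessageListener(message -> {
    try {
        process(message);                        // placeholder for the real processing
        reply(message, "OK");
    } catch (Exception e) {
        reply(message, "ERROR: " + e.getMessage());
    }
});

// where reply(..) is something like:
void reply(Message request, String status) {
    try {
        Destination replyTo = request.getJMSReplyTo();
        if (replyTo == null) return;             // producer did not ask for a reply
        TextMessage response = session.createTextMessage(status);
        response.setJMSCorrelationID(request.getJMSMessageID());
        session.createProducer(replyTo).send(response);
    } catch (JMSException e) {
        LOG.error("Could not send reply", e);
    }
}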
Also, the AsyncCallback class you're using is not part of JMS. I believe it's org.apache.activemq.AsyncCallback provided exclusively by ActiveMQ itself and it only provides callbacks for success or failure for the actual send operation (i.e. not for the consumption of the message).
Lastly, you should know that throwing a RuntimeException from the onMessage method of a javax.jms.MessageListener is considered a "programming error" by the JMS specification and should be avoided. Section 8.7 of the JMS 2 specification states:
It is possible for a listener to throw a RuntimeException; however, this is considered a client programming error. Well behaved listeners should catch such exceptions and attempt to divert messages causing them to some form of application-specific 'unprocessable message' destination.
The result of a listener throwing a RuntimeException depends on the session's acknowledgment mode.
AUTO_ACKNOWLEDGE or DUPS_OK_ACKNOWLEDGE - the message will be immediately redelivered. The number of times a JMS provider will redeliver the same message before giving up is provider-dependent. The JMSRedelivered message header field will be set, and the JMSXDeliveryCount message property incremented, for a message redelivered under these circumstances.
CLIENT_ACKNOWLEDGE - the next message for the listener is delivered. If a client wishes to have the previous unacknowledged message redelivered, it must manually recover the session.
Transacted Session - the next message for the listener is delivered. The client can either commit or roll back the session (in other words, a RuntimeException does not automatically rollback the session).

Handling exceptions in Kafka streams

I have gone through multiple posts, but most of them are about handling bad messages, not about exception handling while processing them.
I want to know how to handle an exception that occurs while the stream application is processing a message. The exception could be due to multiple reasons, such as a network failure, a RuntimeException, etc.
Could someone suggest the right way to do this? Should I use
setUncaughtExceptionHandler, or is there a better way?
How do I handle retries?
It depends on what you want to do with exceptions on the producer side.
If an exception is thrown on the producer (e.g. due to a network failure or because the Kafka broker has died), the stream will die by default. With kafka-streams version 1.1.0 and later you can override the default behavior by implementing ProductionExceptionHandler, like the following:
public class CustomProductionExceptionHandler implements ProductionExceptionHandler {

    @Override
    public ProductionExceptionHandlerResponse handle(final ProducerRecord<byte[], byte[]> record,
                                                     final Exception exception) {
        log.error("Kafka message marked as processed although it failed. Message: [{}], destination topic: [{}]",
                new String(record.value()), record.topic(), exception);
        return ProductionExceptionHandlerResponse.CONTINUE;
    }

    @Override
    public void configure(final Map<String, ?> configs) {
    }
}
From the handle method you can return either CONTINUE if you don't want the stream to die on the exception, or FAIL if you want the stream to stop (FAIL is the default).
And you need to specify this class in the stream config:
default.production.exception.handler=com.example.CustomProductionExceptionHandler
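Programmatically, the same thing can be set through StreamsConfig; a minimal sketch, with placeholder application id and bootstrap servers:
Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-app");            // placeholder
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
props.put(StreamsConfig.DEFAULT_PRODUCTION_EXCEPTION_HANDLER_CLASS_CONFIG,
        CustomProductionExceptionHandler.class);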
Also note that ProductionExceptionHandler handles only exceptions on the producer; it will not handle exceptions thrown while processing a message with stream methods such as mapValues(..), filter(..), branch(..) etc. For those you need to wrap the method logic in try/catch blocks (put all of the method's logic into the try block to guarantee that you handle all exceptional cases):
.filter((key, value) -> { try {..} catch (Exception e) {..} })
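A slightly fuller version of that idea, where isValid is a placeholder for your own per-record logic:
.filter((key, value) -> {
    try {
        return isValid(value);   // all per-record logic stays inside the try block
    } catch (Exception e) {
        log.error("Failed to process record with key {}", key, e);
        return false;            // drop the record instead of letting the stream die
    }
})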
As far as I know, you don't need to handle exceptions on the consumer side explicitly, as Kafka Streams will automatically retry consuming later (the offset is not advanced until messages are consumed and processed). For example, if the Kafka broker is unreachable for some time, you will get exceptions from Kafka Streams, and when the broker is back up, Kafka Streams will consume all messages. So in this case you just get a delay and nothing is corrupted or lost.
With setUncaughtExceptionHandler you are not able to change the default behavior the way you can with ProductionExceptionHandler; with it you can only log the error or send a message to a failure topic.
Update since kafka-streams 2.8.0
Since kafka-streams 2.8.0 you have the ability to automatically replace a failed stream thread (one that died because of an uncaught exception) using the KafkaStreams method setUncaughtExceptionHandler(StreamsUncaughtExceptionHandler eh) with StreamThreadExceptionResponse.REPLACE_THREAD. For more details please take a look at Kafka Streams Specific Uncaught Exception Handler.
kafkaStreams.setUncaughtExceptionHandler(ex -> {
    log.error("Kafka-Streams uncaught exception occurred. Stream will be replaced with new thread", ex);
    return StreamsUncaughtExceptionHandler.StreamThreadExceptionResponse.REPLACE_THREAD;
});
For handling exceptions on the consumer side,
1) You can add a default deserialization exception handler to the streams configuration with the following property:
"default.deserialization.exception.handler" = "org.apache.kafka.streams.errors.LogAndContinueExceptionHandler";
Basically, Apache provides three handler classes out of the box:
1) LogAndContinueExceptionHandler, which you can set as
props.put(StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
        LogAndContinueExceptionHandler.class);
2) LogAndFailExceptionHandler
props.put(StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
        LogAndFailExceptionHandler.class);
3) LogAndSkipOnInvalidTimestamp (strictly a timestamp extractor rather than a deserialization handler)
props.put(StreamsConfig.DEFAULT_TIMESTAMP_EXTRACTOR_CLASS_CONFIG,
        LogAndSkipOnInvalidTimestamp.class);
For custom exception handling,
1)you can implement the DeserializationExceptionHandler interface and override the handle() method.
2) Or you can extend the above-mentioned classes.
setUncaughtExceptionHandler doesn't help to handle exceptions; it runs after the stream has already terminated due to some exception which was not caught.
Kafka provides a few ways to handle exceptions. A simple try/catch would help catch exceptions in the processor code, but Kafka deserialization exceptions (which can be due to data issues) and production exceptions (which occur during communication with the broker) require a DeserializationExceptionHandler and a ProductionExceptionHandler respectively. By default, a Kafka Streams application will fail if it encounters either of these.
You can find more details in this post.
In Spring Cloud Stream you configure your custom deserialization handler using the following:
spring.cloud.stream.kafka.streams.binder.configuration.default.deserialization.exception.handler=your-package-name.CustomLogAndContinueExceptionHandler
CustomLogAndContinueExceptionHandler extends LogAndContinueExceptionHandler or implements DeserializationExceptionHandler, and returns DeserializationHandlerResponse.CONTINUE or FAIL depending on your use case.
@Slf4j
public class CustomLogAndContinueExceptionHandler extends LogAndContinueExceptionHandler {

    @Override
    public DeserializationHandlerResponse handle(ProcessorContext context, ConsumerRecord<byte[], byte[]> record,
                                                 Exception exception) {
        // .... some business logic here ....
        log.error("Message failed: taskId: {}, topic: {}, partition: {}, offset: {}, detail error: {}",
                context.taskId(), record.topic(), record.partition(), record.offset(), exception.getMessage());
        return DeserializationHandlerResponse.CONTINUE;
    }
}

camel: how can i send to an endpoint asynchronously

How can I send a message to an endpoint without waiting for that endpoint's route to be process (that is, my route should just dispatch the message and finish)?
Using wireTap or multicast is what you're after. A direct: endpoint will modify the Exchange for the next step no matter what ExchangePattern is specified. You can see this with the following failing test:
public class StackOverflowTest extends CamelTestSupport {

    private static final String DIRECT_INPUT = "direct:input";
    private static final String DIRECT_NO_RETURN = "direct:no.return";
    private static final String MOCK_OUTPUT = "mock:output";
    private static final String FIRST_STRING = "FIRST";
    private static final String SECOND_STRING = "SECOND";

    @NotNull
    @Override
    protected RouteBuilder createRouteBuilder() throws Exception {
        return new RouteBuilder() {
            @Override
            public void configure() throws Exception {
                from(DIRECT_INPUT)
                        .to(ExchangePattern.InOnly, DIRECT_NO_RETURN)
                        .to(MOCK_OUTPUT)
                        .end();

                from(DIRECT_NO_RETURN)
                        .bean(new CreateNewString())
                        .end();
            }
        };
    }

    @Test
    public void testShouldNotModifyMessage() throws JsonProcessingException, InterruptedException {
        final MockEndpoint myMockEndpoint = getMockEndpoint(MOCK_OUTPUT);
        myMockEndpoint.expectedBodiesReceived(FIRST_STRING);
        template.sendBody(DIRECT_INPUT, FIRST_STRING);
        assertMockEndpointsSatisfied();
    }

    public static class CreateNewString {
        @NotNull
        public String handle(@NotNull Object anObject) {
            return SECOND_STRING;
        }
    }
}
Now if you change the above to a wireTap:
from(DIRECT_INPUT)
    .wireTap(DIRECT_NO_RETURN)
    .to(MOCK_OUTPUT)
    .end();
and you'll see it works as expected. You can also use multicast:
from(DIRECT_INPUT)
    .multicast()
        .to(DIRECT_NO_RETURN)
        .to(MOCK_OUTPUT)
    .end();
wireTap(endpoint) is the answer.
You can use a ProducerTemplate's asyncSendBody() method to send an InOnly message to an endpoint...
template.asyncSendBody("direct:myInOnlyEndpoint", "myMessage");
see http://camel.apache.org/async.html for some more details
That might depend on what endpoints etc. you are using, but one common option is to put a seda endpoint in between:
from("foo:bar")
.bean(processingBean)
.to("seda:asyncProcess") // Async send
.bean(moreProcessingBean)
from("seda:asyncProcess")
.to("final:endpoint"); // could be some syncrhonous endpoint that takes time to send to. http://server/heavyProcessingService or what not.
The seda endpoint behaves like a queue, first in - first out. If you dispatch several events to a seda endpoint faster than the route can finish processing them, they will stack up and wait for processing, which is a nice behaviour.
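If producers can outrun the consuming route, the in-memory seda queue can also be bounded. A sketch using the standard seda endpoint options size and blockWhenFull:
from("foo:bar")
    .bean(processingBean)
    .to("seda:asyncProcess?size=1000&blockWhenFull=true") // block the caller instead of growing unbounded
    .bean(moreProcessingBean);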
You can use inOnly in your route to only send your message to an endpoint without waiting for a response. For more details see the request reply documentation or the event message documentation
from("direct:testInOnly").inOnly("mock:result");
https://people.apache.org/~dkulp/camel/async.html
Both for InOnly and InOut you can send sync or async. It seems strange that you can send InOnly but async; at least the page explains that the call hands the exchange over to Camel for processing and then fires and forgets.
The Async Client API
Camel provides the Async Client API in the ProducerTemplate, where about 10 new methods were added in Camel 2.0. The most important ones are listed below:

setExecutorService (returns void): Is used to set the Java ExecutorService. Camel will by default provide a ScheduledExecutorService with 5 threads in the pool.
asyncSend (returns Future): Is used to send an async exchange to a Camel Endpoint. Camel will immediately return control to the caller thread after the task has been submitted to the executor service. This allows you to do other work while Camel processes the exchange in the other async thread.
asyncSendBody (returns Future): As above but for sending the body only. This is a request-only messaging style, so no reply is expected. Uses the InOnly exchange pattern.
asyncRequestBody (returns Future): As above but for sending the body only. This is a Request Reply messaging style, so a reply is expected. Uses the InOut exchange pattern.
extractFutureBody (returns T): Is used to get the result from the asynchronous thread using the Java Concurrency Future handle.
The Async Client API with callbacks
In addition to the Client API above, Camel provides a variation that uses callbacks when the message Exchange is done.

asyncCallback (returns Future): As asyncSend, but a callback is passed in as a parameter using the org.apache.camel.spi.Synchronization callback. The callback is invoked when the message exchange is done.
asyncCallbackSendBody (returns Future): As above but for sending the body only. This is a request-only messaging style, so no reply is expected. Uses the InOnly exchange pattern.
asyncCallbackRequestBody (returns Future): As above but for sending the body only. This is a Request Reply messaging style, so a reply is expected. Uses the InOut exchange pattern.

These methods also return the Future handle in case you need it. The difference is that they also invoke the callback when the Exchange is done being routed.
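A minimal sketch of the callback variant, assuming a configured ProducerTemplate (SynchronizationAdapter lives in org.apache.camel.support in recent Camel versions, org.apache.camel.impl in older ones):
template.asyncCallbackSendBody("direct:myInOnlyEndpoint", "myMessage", new SynchronizationAdapter() {
    @Override
    public void onComplete(Exchange exchange) {
        LOG.info("Exchange {} done", exchange.getExchangeId());
    }

    @Override
    public void onFailure(Exchange exchange) {
        LOG.error("Exchange failed", exchange.getException());
    }
});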
The Future API
The java.util.concurrent.Future API has, among others, the following methods:

isDone (returns boolean): Returns whether the task is done or not. Will return true even if the task failed due to a thrown exception.
get() (returns Object): Gets the response of the task. If an exception was thrown, a java.util.concurrent.ExecutionException is thrown, wrapping the caused exception.

Spring MDP - how to shut it down on bad message

I have got a Spring MDP implemented using Spring's DefaultMessageListenerContainer, listening to an input queue on WebSphere MQ v7.1. If a bad message comes in (one that causes a RuntimeException), what currently happens is that the transaction is rolled back and the message is put back into the queue. However, the MDP then goes into an infinite loop.
Question 1: For my requirements I would like to be able to shut down processing the moment a bad message is seen. No retries needed. Is it possible to shut down the message listener gracefully when it sees a bad message (as opposed to a crude System.exit() or methods of that sort)? I definitely don't want it to go into an infinite loop.
Edit:
Question 2: Is there a way to stop or suspend the listener container to stop further processing of messages?
The usual way to process this is to have an error queue and when you see a bad message to put it into the error queue.
Some systems handle this for you, such as IBM MQ Series. You just need to configure the error queue and how many retries you want, and it will put it there.
An administrator will then look through these queues and take proper action on the messages that are in the queue (i.e. fix and resubmit them)
Actually, System.exit() is too brutal and... won't work. Retrying of failed messages is handled on the broker (WMQ) side so the message will be redelivered once you restart your application.
The problem you are describing is called poison-message and should be handled on the broker side. It seems to be described in Handling poison messages in WMQ manual and in How WebSphere Application Server handles poison messages.
I solved the problem in the following manner; I'm not sure if this is the best way, but it works.
The MDP implements ApplicationContextAware, and I also maintain a listener state (an enum with OPEN, CLOSE, ERROR values). MDP code fragment below:
// context
private ConfigurableApplicationContext applicationContext;

// listener state
private ListenerState listenerState = ListenerState.OPEN;

@Override
public void setApplicationContext(ApplicationContext applicationContext) throws BeansException {
    this.applicationContext = (ConfigurableApplicationContext) applicationContext;
}

// onMessage method
public void processMessages(....) {
    try {
        process(...);
    } catch (Throwable t) {
        listenerState = ListenerState.ERROR;
        throw new RuntimeException(...);
    }
}

@Override
public void stopContext() {
    applicationContext.stop();
}
In the Java main that loads the Spring context I do this:
// check for errors for exit
Listener listener = (Listener) context.getBean("listener");
ListenerContainer listenerContainer = (ListenerContainer) context.getBean("listenerContainer");

try {
    while (true) {
        Thread.sleep(1000); // sleep for 1 sec
        if (!listener.getListenerState().equals(ListenerState.OPEN)) {
            listener.stopContext();
            listenerContainer.stop();
            System.exit(1);
        }
    }
} catch (InterruptedException e) {
    throw new RuntimeException(e);
}
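A variation I considered (a sketch of my own, not part of the solution above) is to skip the polling loop and register an ErrorHandler on the listener container that stops it from a separate thread, since a container should not be stopped from its own consumer thread:
public class StoppingErrorHandler implements org.springframework.util.ErrorHandler {

    private final DefaultMessageListenerContainer container;

    public StoppingErrorHandler(DefaultMessageListenerContainer container) {
        this.container = container;
    }

    @Override
    public void handleError(Throwable t) {
        // stop asynchronously so the consumer thread does not wait for itself to shut down
        new Thread(container::stop).start();
    }
}

// wired with: container.setErrorHandler(new StoppingErrorHandler(container));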
