How to speed up SEDA shutdown? - java

Is there any way to stop SedaConsumer without waiting for BlockingQueue.poll(pollTimeout, ...) to return? I have a lot of SEDA endpoints in my application, and a graceful shutdown takes a long time. By the time DefaultShutdownStrategy shuts down the SEDA consumers there are no more messages in the queues, and no more messages will be produced (because the routes feeding them are shut down first), yet each SedaConsumer still has to wait up to about 1 second.
Is it possible to force doStop instead of prepareShutdown for SEDA, or to interrupt the worker threads?
I know I can decrease pollTimeout, but I'm afraid it will affect runtime performance.

In SedaConsumer.java:
try {
    // use the end user configured poll timeout
    exchange = queue.poll(pollTimeout, TimeUnit.MILLISECONDS);
    // Omitted
} catch (InterruptedException e) {
    LOG.debug("Sleep interrupted, are we stopping? {}", isStopping() || isStopped());
    continue;
} catch (Throwable e) {
    if (exchange != null) {
        getExceptionHandler().handleException("Error processing exchange", exchange, e);
    } else {
        getExceptionHandler().handleException(e);
    }
}
This construct appears in most places in the thread where an InterruptedException can be thrown, so if the consumer is stopping and its thread is interrupted, it will stop gracefully.
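If shutdown latency matters more than a slightly busier poll loop, pollTimeout can also be lowered per endpoint rather than globally. A minimal sketch (seda:orders is a placeholder endpoint name):
import org.apache.camel.builder.RouteBuilder;

public class FastShutdownRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Lower pollTimeout so queue.poll() wakes up every 100 ms instead of the
        // default 1000 ms; the consumer re-checks its stop flag on every wake-up,
        // so shutdown waits at most ~100 ms per consumer thread.
        from("seda:orders?pollTimeout=100")
            .to("log:orders");
    }
}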

Related

Pulsar client thread balance

I'm trying to implement a Pulsar client with multiple producers that distributes the load among the threads, but regardless of the values passed to ioThreads() and listenerThreads(), it always overloads the first thread (> 65% CPU while the other threads are completely idle).
I have tried a few things, including the "dynamic rebalancing" every hour shown in the last method below, but closing producers in the middle of processing is certainly not the best approach.
This is the relevant code:
...
// pulsar client
pulsarClient = PulsarClient.builder() //
        .operationTimeout(config.getAppPulsarTimeout(), TimeUnit.SECONDS) //
        .ioThreads(config.getAppPulsarClientThreads()) //
        .listenerThreads(config.getAppPulsarClientThreads()) //
        .serviceUrl(config.getPulsarServiceUrl()).build();
...
private void createProducers() {
    String strConsumerTopic = this.config.getPulsarTopicInput();
    List<Integer> protCasesList = this.config.getEventProtoCaseList();
    for (Integer e : protCasesList) {
        String topicName = config.getPulsarTopicOutput().concat(String.valueOf(e));
        LOG.info("Creating producer for topic: {}", topicName);
        Producer<byte[]> protobufProducer = pulsarClient.newProducer().topic(topicName).enableBatching(false)
                .blockIfQueueFull(true).compressionType(CompressionType.NONE)
                .sendTimeout(config.getPulsarSendTimeout(), TimeUnit.SECONDS)
                .maxPendingMessages(config.getPulsarMaxPendingMessages()).create();
        this.mapLink.put(strConsumerTopic.concat(String.valueOf(e)), protobufProducer);
    }
}
public void closeProducers() {
    String strConsumerTopic = this.config.getPulsarTopicInput();
    List<Integer> protCasesList = this.config.getEventProtoCaseList();
    for (Integer e : protCasesList) {
        try {
            this.mapLink.get(strConsumerTopic.concat(String.valueOf(e))).close();
            LOG.info("{} producer correctly closed...",
                    this.mapLink.get(strConsumerTopic.concat(String.valueOf(e))).getProducerName());
        } catch (PulsarClientException e1) {
            LOG.error("Producer: {} not closed cause: {}",
                    this.mapLink.get(strConsumerTopic.concat(String.valueOf(e))).getProducerName(),
                    e1.getMessage());
        }
    }
}
public void rebalancePulsarThreads(boolean firstRun) {
    ThreadMXBean threadHandler = ManagementFactory.getThreadMXBean();
    ThreadInfo[] threadsInfo = threadHandler.getThreadInfo(threadHandler.getAllThreadIds());
    for (ThreadInfo threadInfo : threadsInfo) {
        if (threadInfo.getThreadName().contains("pulsar-client-io")) {
            // enable cpu time for all threads
            threadHandler.setThreadCpuTimeEnabled(true);
            // get cpu time for this specific thread
            // NOTE: getThreadCpuTime returns cumulative CPU time in nanoseconds,
            // not a percentage, so this comparison is effectively always true
            long threadCPUTime = threadHandler.getThreadCpuTime(threadInfo.getThreadId());
            int thresholdCPUTime = 65;
            if (threadCPUTime > thresholdCPUTime) {
                LOG.warn("Pulsar client thread with CPU time greater than {}% - REBALANCING now", thresholdCPUTime);
                try {
                    closeProducers();
                } catch (Exception e) {
                    if (!firstRun) {
                        // producers will not be available in the first run
                        // therefore, the logging only happens when it is not the first run
                        LOG.warn("Unable to close Pulsar client threads on rebalancing: {}", e.getMessage());
                    }
                }
                try {
                    createProducers();
                } catch (Exception e) {
                    LOG.warn("Unable to create Pulsar client threads on rebalancing: {}", e.getMessage());
                }
            }
        }
    }
}
From what you describe, the most likely scenario is that all the topics you're using are served by a single broker.
If that's indeed the case, and leaving topic load balancing across brokers aside, it's normal that only one thread is busy: all these producers share a single, pooled TCP connection, and each connection is assigned to one IO thread (listener threads are only used for consumer listeners).
If you want to force more threads, you can increase the maximum number of TCP connections per broker (connectionsPerBroker) so that all the configured IO threads are used, e.g.:
PulsarClient client = PulsarClient.builder()
        .serviceUrl("pulsar://localhost:6650")
        .ioThreads(16)
        .connectionsPerBroker(16)
        .build();

Looking for ways to detect AWS Lambda timeouts (a few seconds before timeout) in Java, and to test the same

My current Lambda function calls a 3rd party web service synchronously. This function occasionally times out (the current timeout is set to 25s and cannot be increased further).
My code is something like:
handleRequest(InputStream input, OutputStream output, Context context) throws IOException {
    try {
        response = calling 3rd party REST service
    } catch (Exception e) {
        // handle exceptions
    }
}
1) I want to custom-handle the timeout (tracking the time and reacting a few milliseconds before the actual timeout) within my Lambda function by sending a custom error message back to the client.
How can I effectively use the
context.getRemainingTimeInMillis()
method to track the time remaining while my synchronous call is running? I'm planning to call context.getRemainingTimeInMillis() asynchronously, roughly as sketched below. Is that the right approach?
2) What is a good way to test the custom timeout functionality?
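A rough sketch of the watchdog idea from question 1, assuming a handler that receives the Context; RESERVE_MILLIS, the handler signature, and the fallback action are placeholders:
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;
import com.amazonaws.services.lambda.runtime.Context;

public class TimeoutAwareHandler {
    private static final long RESERVE_MILLIS = 500; // how early to bail out

    public void handleRequest(Context context) {
        ScheduledExecutorService watchdog = Executors.newSingleThreadScheduledExecutor();
        // fires RESERVE_MILLIS before Lambda would kill the invocation
        ScheduledFuture<?> guard = watchdog.schedule(
                () -> { /* send the custom error message to the client here */ },
                context.getRemainingTimeInMillis() - RESERVE_MILLIS,
                TimeUnit.MILLISECONDS);
        try {
            // response = calling 3rd party REST service, as in the snippet above
        } finally {
            guard.cancel(false); // finished in time, disarm the watchdog
            watchdog.shutdownNow();
        }
    }
}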
I solved my problem by increasing the Lambda timeout, invoking my process in a new thread, and timing out the thread after n seconds:
ExecutorService service = Executors.newSingleThreadExecutor();
try {
    Runnable r = () -> {
        try {
            myFunction();
        } catch (Exception e) {
            e.printStackTrace();
        }
    };
    Future<?> f = service.submit(r);
    f.get(n, TimeUnit.MILLISECONDS); // attempt the task for n milliseconds
} catch (TimeoutException toe) {
    // custom logic
}
Another option is to use the readTimeout property of the REST client (Jersey in my case) to set the timeout. But I see that this property is not working consistently within the Lambda code. Not sure if it's an issue with the Jersey client or with Lambda.
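For reference, this is roughly how that read timeout is configured through the JAX-RS client API in Jersey (the millisecond values are placeholders):
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import org.glassfish.jersey.client.ClientProperties;

Client client = ClientBuilder.newClient();
// both values are in milliseconds
client.property(ClientProperties.CONNECT_TIMEOUT, 2000);
client.property(ClientProperties.READ_TIMEOUT, 20000);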
You can also try a cancellation token to throw a custom exception before the Lambda times out (this example is in C#):
try
{
    var tokenSource = new CancellationTokenSource(TimeSpan.FromMinutes(1)); // set timeout value
    var taskResult = ApiCall(); // call web service method
    while (!taskResult.IsCompleted)
    {
        if (tokenSource.IsCancellationRequested)
        {
            throw new OperationCanceledException("time out for lambda"); // throw custom exception, e.g. OperationCanceledException
        }
    }
    return taskResult.Result;
}
catch (OperationCanceledException ex)
{
    // handle exception
}

How to check that KafkaConsumer still has assigned partitions without reading more data with poll()

In my KafkaConsumer app I want to read a batch of messages with poll() and process them. Processing may fail, though; in that case I want to retry until I succeed, but only while the consumer still owns its partitions. I don't want to constantly call poll() because I don't want to read more data.
This is a code snippet:
consumer = new KafkaConsumer<>(consumerConfig);
try {
    consumer.subscribe(config.topics() /* Callback does not work as I do not call poll in between */);
    while (true) {
        ConsumerRecords<byte[], Value> values = consumer.poll(10000);
        while (/* I am still owner of partitions */) {
            try {
                process(values);
            } catch (Exception e) {
                log.error("I don't care, just retry while I own the partitions", e);
            }
        }
    }
} catch (WakeupException e) {
    // shutting down
} finally {
    consumer.close();
}
There is a callback method that tells you when your consumer's partition assignments are about to be revoked. Keep processing messages unless you get an onPartitionsRevoked() event.
https://kafka.apache.org/0110/javadoc/org/apache/kafka/clients/consumer/ConsumerRebalanceListener.html#onPartitionsRevoked(java.util.Collection)
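A sketch of wiring that up, with the caveat (noted in the question's snippet) that the callback only fires from inside poll():
import java.util.Collection;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.common.TopicPartition;

class RevocationTracker implements ConsumerRebalanceListener {
    volatile boolean revoked = false;

    @Override
    public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
        revoked = true; // someone else may own these partitions now, stop retrying
    }

    @Override
    public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
        revoked = false;
    }
}
// subscribe with the listener and gate the retry loop on the flag:
// consumer.subscribe(config.topics(), tracker);
// while (!tracker.revoked) { ... retry process(values) ... }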
What about simply calling assignment()?
http://kafka.apache.org/0110/javadoc/org/apache/kafka/clients/consumer/KafkaConsumer.html#assignment()
I came to the conclusion that it is impossible to call poll() without reading messages with the current Kafka consumer (0.10.2.x). However, it is possible to update the offset after a processing failure, so I update the offset as if the messages were never read:
while (!stopped) {
    ConsumerRecords<byte[], Value> values = consumer.poll(timeout);
    try {
        process(values);
    } catch (Exception e) {
        rewind(values);
        // Ensure a delay after errors to let dependencies recover
        Thread.sleep(delay);
    }
}
and the rewind method is:
private void rewind(ConsumerRecords<byte[], Value> records) {
    records.partitions().forEach(partition -> {
        long offset = records.records(partition).get(0).offset();
        consumer.seek(partition, offset);
    });
}
It solves the initial problem.

RabbitMQ Java client - How to sensibly handle exceptions and shutdowns?

Here's what I know so far (please correct me):
In the RabbitMQ Java client, operations on a channel throw IOException when there is a general network failure (malformed data from broker, authentication failures, missed heartbeats).
Operations on a channel can also throw the ShutdownSignalException unchecked exception, typically an AlreadyClosedException when we tried to perform an action on the channel/connection after it has been shut down.
The shutting-down process happens in the event of "network failure, internal failure or explicit local shutdown" (e.g. via channel.close() or connection.close()). The shutdown event propagates down the "topology", from Connection -> Channel -> Consumer, and when it reaches the Channel, the Consumer's handleShutdownSignal() method gets called.
A user can also add a shutdown listener, which is called after the shutdown process completes (see the sketch below).
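For concreteness, this is the kind of listener I mean (a sketch; connection is an already-open Connection and log is a placeholder):
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ShutdownListener;
import com.rabbitmq.client.ShutdownSignalException;

connection.addShutdownListener(new ShutdownListener() {
    @Override
    public void shutdownCompleted(ShutdownSignalException cause) {
        // isHardError(): connection-level rather than channel-level shutdown
        // isInitiatedByApplication(): explicit close() rather than a failure
        log.info("Shutdown: hard={}, byApp={}",
                cause.isHardError(), cause.isInitiatedByApplication());
    }
});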
Here is what I'm missing:
Since an IOException indicates a network failure, does it also initiate a shutdown request?
How does using auto-recovery mode affect shutdown requests? Does it cause channel operations to block while it tries to reconnect to the channel, or will the ShutdownSignalException still be thrown?
Here is how I'm handling exceptions at the moment; is this a sensible approach?
My setup is that I'm polling a QueueingConsumer and dispatching tasks to a worker pool. The RabbitMQ client is encapsulated in MyRabbitMQWrapper below. When an exception occurs while polling the queue, I just gracefully shut everything down and restart the client. When an exception occurs in a worker, I also just log it and finish the worker.
My biggest worry (related to Question 1): suppose an IOException occurs in a worker, so the task doesn't get acked. If a shutdown does not then occur, I now have an un-acked task that will be in limbo forever.
Pseudo-code:
class Main {
    public static void main(String[] args) {
        while (true) {
            run();
            // Easy way to restart the client, the connection has been
            // closed so RabbitMQ will re-queue any un-acked tasks.
            log.info("Shutdown occurred, restarting in 5 seconds");
            Thread.sleep(5000);
        }
    }

    public void run() {
        MyRabbitMQWrapper rw = new MyRabbitMQWrapper("localhost");
        try {
            rw.connect();
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    // Wait for a message on the QueueingConsumer
                    MyMessage t = rw.getNextMessage();
                    workerPool.submit(new MyTaskRunnable(rw, t));
                } catch (InterruptedException | IOException | ShutdownSignalException e) {
                    // Handle all AMQP library exceptions by cleaning up and returning
                    log.warn("Shutting down", e);
                    workerPool.shutdown();
                    break;
                }
            }
        } catch (IOException e) {
            log.error("Could not connect to broker", e);
        } finally {
            try {
                rw.close();
            } catch (IOException e) {
                log.info("Could not close connection");
            }
        }
    }
}
class MyTaskRunnable implements Runnable {
    ....
    public void run() {
        doStuff();
        try {
            rw.ack(...);
        } catch (IOException | ShutdownSignalException e) {
            log.warn("Could not ack task");
        }
    }
}

Need an interruptable way to listen for UDP packets in a worker thread

I'm developing a Google Glass app which needs to listen for UDP packets in a worker thread (integrating with an existing system which sends UDP packets). I previously posted a question (see here) and received an answer which provided some guidance on how to do this. Using the approach in the other discussion I'll have a worker thread which is blocked on DatagramSocket.receive().
Further reading suggests that I'll need to be able to start/stop this on demand, which brings me to the question I'm posting here: how can I do the above in a way that lets me gracefully interrupt the UDP listening? Is there some way I can "nicely" ask the socket to break out of the receive() call from another thread?
Or is there another way to listen for UDP packets in an interruptable fashion so I can start/stop the listener thread as needed in response to device events?
My recommendation:
private DatagramSocket mSocket;

@Override
public void run() {
    Exception ex = null;
    try {
        // read while not interrupted
        while (!interrupted()) {
            ....
            mSocket.receive(...); // throws when interrupted
        }
    } catch (Exception e) {
        if (!interrupted()) {
            // a real failure, not a user-requested interrupt
            ex = e;
        }
    } finally {
        // always release
        release();
        // rethrow the exception if we need to (run() cannot throw checked exceptions)
        if (ex != null) {
            throw new RuntimeException(ex);
        }
    }
}

public void release() {
    // closing the socket makes a blocked receive() throw a SocketException
    if (mSocket != null) {
        mSocket.close();
        mSocket = null;
    }
}

@Override
public void interrupt() {
    super.interrupt();
    release();
}
Clean-cut and simple: it always releases the socket, and interrupting stops the thread cleanly in both cases.
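A minimal usage sketch, assuming the methods above live in a Thread subclass (hypothetically named UdpListenerThread):
UdpListenerThread listener = new UdpListenerThread();
listener.start();
// ... later, in response to a device event:
listener.interrupt(); // closes the socket, so a blocked receive() unblocks immediately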
