Kinesis Java consumer - keepalive/heartbeat?

I've written a Java service that consumes from a Kinesis stream. The service starts and runs well, and consumes happily as long as the data doesn't get too infrequent. If there is a gap of more than 60-90 minutes it stops consuming. No errors are issued, but subsequent data queues up in Kinesis and sits there until the service is restarted.
Does Kinesis have some sort of heartbeat or keepalive message that needs to be sent during these quiet periods?
I looked through the configuration (KinesisClientLibConfiguration) and didn't see anything obvious. Hopefully this won't entail cycling the connection on an hourly basis.
Edit:
KinesisClientLibConfiguration kinesisClientLibConfiguration =
        new KinesisClientLibConfiguration(config.getString("appname"),
                config.getString("kinesis/stream_name"),
                kinesisCredentialsProvider, localProvider,
                localProvider, workerId);
kinesisClientLibConfiguration.withInitialPositionInStream(
        InitialPositionInStream.TRIM_HORIZON);
Edit:
I managed to find some error output - lots of these:
com.amazonaws.services.kinesis.clientlibrary.lib.worker.ProcessTask call
SEVERE: ShardId shardId-000000000000: Caught exception:
com.amazonaws.services.kinesis.model.AmazonKinesisException: The security token included in the request is expired (Service: AmazonKinesis; Status Code: 400; Error Code: ExpiredTokenException; Request ID: cdb95cb6-23bb-0067-9c7b-1ad1125d7b7e)
These messages start precisely 60 minutes after the app starts. I found this reference saying to 'refresh 5 minutes before expiration'. Given that I have two kinds of credentials in this call (one for Kinesis and one for DynamoDB/CloudWatch), I'll try a timer that calls .refresh().

(here's what worked)
Note: this involves two Credentials sources - a local and a remote. The local ones are for DynamoDB and CloudWatch. The remote is for Kinesis.
AWSCredentialsProvider localProvider =
        new ClasspathPropertiesFileCredentialsProvider("credentials");
STSAssumeRoleSessionCredentialsProvider stsRoleCredentials =
        new STSAssumeRoleSessionCredentialsProvider.Builder(
                config.getString("kinesis/arn"), config.getString("kinesis/role_session_name"))
                .withExternalId(config.getString("kinesis/external_id"))
                .build();
KinesisClientLibConfiguration kinesisClientLibConfiguration = new KinesisClientLibConfiguration(
        config.getString("appname"),
        config.getString("kinesis/stream_name"),
        stsRoleCredentials, localProvider, localProvider, workerId);
If you use a credentials provider like this, it will do the token refresh for you. After some exploration I found that the .refresh() calls I was making were hitting empty (no-op) methods.
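For completeness, here's a rough sketch of how that configuration gets handed to a KCL 1.x Worker; MyRecordProcessorFactory is a placeholder standing in for your own record-processor factory, not something from the code above:

    import com.amazonaws.services.kinesis.clientlibrary.interfaces.v2.IRecordProcessorFactory;
    import com.amazonaws.services.kinesis.clientlibrary.lib.worker.Worker;

    // Placeholder factory supplying your own record processor implementation.
    IRecordProcessorFactory recordProcessorFactory = new MyRecordProcessorFactory();

    // The Worker polls the shards and invokes the record processor. Because the
    // configuration carries credentials *providers* (not static credentials), the
    // KCL can fetch fresh STS tokens as needed, even across long quiet periods.
    Worker worker = new Worker.Builder()
            .recordProcessorFactory(recordProcessorFactory)
            .config(kinesisClientLibConfiguration)
            .build();
    worker.run(); // blocks; typically started on its own thread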

Related

Unable to execute HTTP request: Acquire operation took longer than the configured maximum time

I am running a batch job in AWS which consumes messages from an SQS queue and writes them to a Kafka topic using Akka. I've created an SqsAsyncClient with the following parameters:
private static SqsAsyncClient getSqsAsyncClient(final Config configuration, final String awsRegion) {
    var asyncHttpClientBuilder = NettyNioAsyncHttpClient.builder()
            .maxConcurrency(100)
            .maxPendingConnectionAcquires(10_000)
            .connectionMaxIdleTime(Duration.ofSeconds(60))
            .connectionTimeout(Duration.ofSeconds(30))
            .connectionAcquisitionTimeout(Duration.ofSeconds(30))
            .readTimeout(Duration.ofSeconds(30));
    return SqsAsyncClient.builder()
            .region(Region.of(awsRegion))
            .httpClientBuilder(asyncHttpClientBuilder)
            .endpointOverride(URI.create("https://sqs.us-east-1.amazonaws.com/000000000000"))
            .build();
}
private static SqsSourceSettings getSqsSourceSettings(final Config configuration) {
    final SqsSourceSettings sqsSourceSettings = SqsSourceSettings.create().withCloseOnEmptyReceive(false);
    if (configuration.hasPath(ConfigPaths.SqsSource.MAX_BATCH_SIZE)) {
        sqsSourceSettings.withMaxBatchSize(10);
    }
    if (configuration.hasPath(ConfigPaths.SqsSource.MAX_BUFFER_SIZE)) {
        sqsSourceSettings.withMaxBufferSize(1000);
    }
    if (configuration.hasPath(ConfigPaths.SqsSource.WAIT_TIME_SECS)) {
        sqsSourceSettings.withWaitTime(Duration.of(20, SECONDS));
    }
    return sqsSourceSettings;
}
But, whilst running my batch job I get the following AWS SDK exception:
software.amazon.awssdk.core.exception.SdkClientException: Unable to execute HTTP request: Acquire operation took longer than the configured maximum time. This indicates that a request cannot get a connection from the pool within the specified maximum time. This can be due to high request rate.
The exception still seems to occur even after I try tweaking the parameters mentioned here:
Consider taking any of the following actions to mitigate the issue: increase max connections, increase acquire timeout, or slowing the request rate. Increasing the max connections can increase client throughput (unless the network interface is already fully utilized), but can eventually start to hit operation system limitations on the number of file descriptors used by the process. If you already are fully utilizing your network interface or cannot further increase your connection count, increasing the acquire timeout gives extra time for requests to acquire a connection before timing out. If the connections doesn't free up, the subsequent requests will still timeout. If the above mechanisms are not able to fix the issue, try smoothing out your requests so that large traffic bursts cannot overload the client, being more efficient with the number of times you need to call AWS, or by increasing the number of hosts sending requests
Has anyone run into this issue before?
I encountered the same issue, and I ended up firing 100 async batch requests, then waiting for those 100 to complete before firing another 100, and so on.
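For reference, here is a minimal sketch of that throttling pattern with the AWS SDK v2 SqsAsyncClient - shown with sendMessage, but the same idea applies to any of the client's async calls; the window size of 100 and the pre-built list of requests are assumptions for illustration:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.CompletableFuture;
    import software.amazon.awssdk.services.sqs.SqsAsyncClient;
    import software.amazon.awssdk.services.sqs.model.SendMessageRequest;

    void sendThrottled(SqsAsyncClient sqs, List<SendMessageRequest> requests) {
        final int windowSize = 100; // cap on in-flight requests
        for (int start = 0; start < requests.size(); start += windowSize) {
            int end = Math.min(start + windowSize, requests.size());
            List<CompletableFuture<?>> window = new ArrayList<>();
            for (SendMessageRequest request : requests.subList(start, end)) {
                window.add(sqs.sendMessage(request)); // fire the async request
            }
            // Wait for the whole window to clear before firing the next one, so the
            // connection pool is never asked for more connections than it can acquire in time.
            CompletableFuture.allOf(window.toArray(new CompletableFuture[0])).join();
        }
    }

This keeps the number of pending connection acquisitions bounded instead of letting a large burst queue up against the Netty pool.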

Periods of prolonged inactivity and frequent MessageLockLostException in QueueClient

Background
We have a data transfer solution with Azure Service Bus as the message broker. We are transferring data from x datasets through x queues - with x dedicated QueueClients as senders. Some senders publish messages at the rate of one message every two seconds, while others publish one every 15 minutes.
The application on the data source side (where senders are) is working just fine, giving us the desired throughput.
On the other side, we have an application with one QueueClient receiver per queue with the following configuration:
maxConcurrentCalls = 1
autoComplete = true (if receive mode = RECEIVEANDDELETE) and false (if receive mode = PEEKLOCK) - for some receivers we want to preserve the messages in the Service Bus queue if they shut down unexpectedly.
maxAutoRenewDuration = 3 minutes (lock duration on all queues = 30 seconds)
an Executor service with a single thread
The MessageHandler registered with each of these receivers does the following:
public CompletableFuture<Void> onMessageAsync(final IMessage message) {
    // deserialize the message body
    final CustomObject customObject = (CustomObject) SerializationUtils
            .deserialize((byte[]) message.getMessageBody().getBinaryData().get(0));
    // process processDB1() and processDB2() asynchronously
    final List<CompletableFuture<Boolean>> processFutures = new ArrayList<CompletableFuture<Boolean>>();
    processFutures.add(processDB1(customObject)); // processDB1() returns CompletableFuture<Boolean>
    processFutures.add(processDB2(customObject)); // processDB2() returns CompletableFuture<Boolean>
    // join both CompletableFutures to get the result Booleans
    final List<Boolean> results = CompletableFuture
            .allOf(processFutures.toArray(new CompletableFuture[processFutures.size()]))
            .thenApply(future -> processFutures.stream()
                    .map(CompletableFuture<Boolean>::join)
                    .collect(Collectors.toList()))
            .join();
    if (results.contains(false)) {
        // dead-letter the message if results contains false
        return getQueueClient().deadLetterAsync(message.getLockToken());
    } else {
        // complete the message otherwise
        return getQueueClient().completeAsync(message.getLockToken());
    }
}
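For context, here is a rough sketch of how a receiver with the configuration listed above might be registered, assuming the azure-servicebus 3.x API; the connection string, queue name, and MyMessageHandler (the class containing the onMessageAsync above) are placeholders, not our actual names:

    import java.time.Duration;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import com.microsoft.azure.servicebus.MessageHandlerOptions;
    import com.microsoft.azure.servicebus.QueueClient;
    import com.microsoft.azure.servicebus.ReceiveMode;
    import com.microsoft.azure.servicebus.primitives.ConnectionStringBuilder;

    // Placeholder connection details; in our service these come from configuration.
    QueueClient receiver = new QueueClient(
            new ConnectionStringBuilder(connectionString, queueName), ReceiveMode.PEEKLOCK);

    // maxConcurrentCalls = 1, autoComplete = false (PEEKLOCK), maxAutoRenewDuration = 3 minutes
    MessageHandlerOptions options = new MessageHandlerOptions(1, false, Duration.ofMinutes(3));

    // Executor service with a single thread, as described above
    ExecutorService executor = Executors.newSingleThreadExecutor();
    receiver.registerMessageHandler(new MyMessageHandler(), options, executor);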
We tested with the following scenarios:
Scenario 1 - receive mode = RECEIVEANDDELETE, message publish rate: 30/ minute
Expected Behavior
The messages should be received continuously with a constant throughput (which need not necessarily be the throughput at the source, where messages are published).
Actual behavior
We observe random, long periods of inactivity from the QueueClient - ranging from minutes to hours - during which there are no outgoing messages from the Service Bus namespace (observed on the Metrics charts) and there are no consumption logs for the same time periods!
Scenario 2 - receive mode = PEEKLOCK, message publish rate: 30/ minute
Expected Behavior
The messages should be received continuously with a constant throughput (which need not necessarily be the throughput at the source, where messages are published).
Actual behavior
We keep seeing MessageLockLostException constantly after 20-30 minutes into the run of the application.
We tried doing the following -
we reduced the prefetch count (from 20 * processing rate - as mentioned in the Best Practices guide) to a bare minimum (to even 0 in one test cycle), to reduce the no. of messages that are locked for the client
increased the maxAutoRenewDuration to 5 minutes - our processDB1() and processDB2() do not take more than a second or two for almost 90% of the cases - so, I think the lock duration of 30 seconds and maxAutoRenewDuration are not issues here.
removed the blocking CompletableFuture.get() and made the processing synchronous.
None of these tweaks helped us fix the issue. What we observed is that the COMPLETE or RENEWMESSAGELOCK are throwing the MessageLockLostException.
We need help with finding answers for the following:
why is there a long period of inactivity of the QueueClient in scenario 1?
how do we know that the MessageLockLostExceptions are thrown because the locks have actually expired? We suspect the locks cannot be expiring that soon, since our processing completes in a second or two; disabling prefetch did not solve this either.
Versions and Service Bus details
Java - openjdk-11-jre
Azure Service Bus namespace tier: Standard
Java SDK version - 3.4.0
For Scenario 1 :
If you have duplicate detection history enabled, this behavior can occur, as in the scenario explained below:
I had enabled it with a 30-second window and was constantly hitting Service Bus with duplicate messages (in my case, messages with the same messageId from the client - 30 per minute). I saw no outgoing activity for that window: although the messages were received by Service Bus from the sending client, I was not able to see them in the outgoing messages. You could check whether you are encountering duplicate messages that are being filtered out, which would in turn show up as inactivity on the outgoing side.
Also note: you can't enable/disable duplicate detection after the queue is created. You can only do so at the time of creating the queue.
The issue was not with the QueueClient object per se. It was with the processes we were triggering from within the MessageHandler: processDB1(customObject) and processDB2(customObject). Since these processes were not optimized, message consumption dropped and the locks expired (in peek-lock mode), because the handler was spending more time completing these operations than the rate at which messages were published to the queues allowed.
After optimizing the processes, the consumption and completion (in peek-lock mode) were just fine.

Google PubSub Java (Scala) Client Gets Excessive Amount of Resent Messages

I have a scenario where I load a subscription with around 1100 messages. I then start a Spark job which pulls messages from this subscription with these settings:
MaxOutstandingElementCount: 5
MaxAckExtensionPeriod: 60 min
AckDeadlineSeconds: 600
The first message to get processed starts a cache generation which takes about 30 minutes to complete. Any other messages arriving while this is going on are simply "returned" with no ack or nack. After that, a given message takes between 1 min and 30 mins to process. With an ack extension period of 60 min, I would never expect to see resending of messages.
The behaviour I am seeing is that while the initial cache is being generated, every 10 minutes 5 new messages are grabbed by the client and returned with no ack or nack by my code. This is unexpected. I would expect the deadline of the original 5 messages to be extended up to an hour.
Furthermore, after having processed and acked about 500 of the messages, I would expect around 600 left in the subscription, but I see almost the original 1100. These turn out to be resent duplicates, as I log these in my code. This is also very unexpected.
This is a screenshot from google console after around 500 messages have been processed and acked (ignore the first "hump", that was an aborted test run):
Am I missing something?
Here is the setup code:
val name = ProjectSubscriptionName.of(ConfigurationValues.ProjectId,
  ConfigurationValues.PubSubSubscription)
val topic = ProjectTopicName.of(ConfigurationValues.ProjectId,
  ConfigurationValues.PubSubSubscriptionTopic)
val pushConfig = PushConfig.newBuilder.build
val ackDeadlineSeconds = 600
subscriptionAdminClient.createSubscription(
  name,
  topic,
  pushConfig,
  ackDeadlineSeconds)

val flowControlSettings = FlowControlSettings.newBuilder()
  .setMaxOutstandingElementCount(5L)
  .build()

// create a subscriber bound to the asynchronous message receiver
val subscriber = Subscriber
  .newBuilder(subscriptionName, new EtlMessageReceiver(spark))
  .setFlowControlSettings(flowControlSettings)
  .setMaxAckExtensionPeriod(Duration.ofMinutes(60))
  .build
subscriber.startAsync.awaitRunning()
Here is the code in the receiver which runs when a message arrives while the cache is being generated:
if (!BIQConnector.cacheGenerationDone) {
  Utilities.logLine(
    s"PubSub message for work item $uniqueWorkItemId ignored as cache is still being generated.")
  return
}
And finally when a message has been processed:
consumer.ack()
Utilities.logLine(s"PubSub message ${message.getMessageId} for $tableName acknowledged.")
// Write back to ETL Manager
Utilities.logLine(
  s"Writing result message back to topic ${etlResultTopic} for table $tableName, $tableDetailsForLog.")
sendPubSubResult(importTableName, validTableName, importTimestamp, 2, etlResultTopic, stageJobData,
  tableDetailsForLog, "Success", isDeleted)
Is your Spark job using a Pub/Sub client library to pull messages? These libraries should indeed keep extending your message deadlines up to the MaxAckExtensionPeriod you specified.
If your job is using a Pub/Sub client library, this is unexpected behavior. You should contact Google Cloud support with your project name, subscription name, client library version, and a sample of the message IDs from the messages you are "returning" without acking. They will be able to investigate further into why you're receiving these resent messages.

Azure eventhub Kafka org.apache.kafka.common.errors.TimeoutException for some of the records

I have an ArrayList containing 80 to 100 records, and I'm trying to stream and send each individual record (the POJO, not the entire list) to a Kafka topic (Event Hub). A cron job is scheduled to send these records (POJOs) to the Event Hub every hour.
I'm able to see messages being sent to the Event Hub, but after 3 to 4 successful runs I get the following exception (within a run, several messages are sent and several fail with the exception below):
Expiring 14 record(s) for eventhubname: 30125 ms has passed since batch creation plus linger time
Following is the config for Producer used,
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
props.put(ProducerConfig.ACKS_CONFIG, "1");
props.put(ProducerConfig.RETRIES_CONFIG, "3");
Message Retention period - 7
Partition - 6
using Spring Kafka (2.2.3) to send the events
the method where the Kafka send happens is marked as @Async
@Async
protected void send() {
    kafkatemplate.send(record);
}
Expected - No exception to be thrown from kafka
Actual - org.apache.kafka.common.errors.TimeoutException is being thrown
Prakash - we have seen a number of issues where spiky producer patterns see batch timeout.
The problem here is that the producer has two TCP connections that can go idle for > 4 mins - at that point, Azure load balancers close out the idle connections. The Kafka client is unaware that the connections have been closed so it attempts to send a batch on a dead connection, which times out, at which point retry kicks in.
Set connections.max.idle.ms to < 4mins – this allows Kafka client’s network client layer to gracefully handle connection close for the producer’s message-sending TCP connection
Set metadata.max.age.ms to < 4mins – this is effectively a keep-alive for the producer metadata TCP connection
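A minimal sketch of how those two settings can be added to a producer configuration like the one shown in the question (the 3-minute values below are just an example of "under 4 minutes", not a recommendation from the EH team):

    // Keep both producer TCP connections from idling past Azure's ~4 minute load-balancer timeout.
    props.put(ProducerConfig.CONNECTIONS_MAX_IDLE_MS_CONFIG, "180000"); // close idle connections gracefully
    props.put(ProducerConfig.METADATA_MAX_AGE_CONFIG, "180000");        // periodic metadata refresh acts as a keep-alive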
Feel free to reach out to the EH product team on Github, we are fairly good about responding to issues - https://github.com/Azure/azure-event-hubs-for-kafka
This exception indicates you are queueing records at a faster rate than they can be sent. Once a record is added to a batch, there is a time limit for sending that batch, to ensure it is sent within a specified duration. This is controlled by the producer configuration parameter request.timeout.ms. If the batch has been queued longer than the timeout limit, the exception is thrown and the records in that batch are removed from the send queue.
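If the bursts are legitimate, the relevant knobs look like this, again added to the props map shown in the question (the values are illustrative assumptions, not tuned recommendations):

    // Give a queued batch more time before it is expired, and batch bursts a little.
    props.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, "60000"); // default is 30000 ms
    props.put(ProducerConfig.LINGER_MS_CONFIG, "5");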
Please check the link below for a similar issue; it might help:
Kafka producer TimeoutException: Expiring 1 record(s)
You can also check this link:
when-does-the-apache-kafka-client-throw-a-batch-expired-exception/34794261#34794261 for more details about the batch expired exception.
Also implement a proper retry policy.
Note this does not account for any network issues on the sender side; with network issues you will not be able to send to the Event Hub at all.
Hope it helps.

Understanding Long Polling Client Logic for Use With Azure Groovy Java

Premise:
We have Groovy scripts that execute every minute. I want one of those scripts to open an HTTP client and poll a Service Bus queue/topic for messages. I have my REST client code working and getting messages from the Service Bus queue. I can do a "Get" every 5 seconds, and Wireshark shows that it's reusing the same TCP connection, which is better than I expected, but it's still not ideal.
Goal:
I would like to make this HTTP client do "long polling", for efficiency and to achieve actual real-time processing. It seems to be more complicated than I anticipated.
Problem:
When I do a "Delete" call to read message from a service bus queue, it immediately returns "HTTP/1.1 204 No Content", and the connection closes. I set a timeout on the client, but I don't think that matters.
Here's the article that shows service bus says it's supports long polling, which I imagine is the hard part. Azure Service Bus Queues
I feel that I don't understand something fundamental about how to implement long polling in code. My understanding is that when there is no data in the queue, it's supposed to delay the response until data exists, or until my client eventually times out waiting (which lets me set my own disconnect/reconnect interval). I don't even care about blocking/nonblocking etc, because the script execution is already spreadout into a threadpool, and will be terminated forcibly and all that.
Any help is greatly appreciated.
The correct and simple answer is that appending the following to the end of an Azure Service Bus REST API URL is the way to implement long polling with that service: ?timeout=60, where 60 tells Azure to wait up to 60 seconds before responding with no data. So your application can check for data every 60 seconds, with an internal timeout of 60 seconds on each HTTP request. This holds the TCP connection open for that timeframe while waiting for an HTTP response.
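Here is a rough sketch of such a long-polling receive against the Service Bus REST API using Java 11's HttpClient; the namespace, queue name, SAS token, and process() helper are placeholders, and this uses the destructive Receive-and-Delete call (DELETE .../messages/head):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.time.Duration;

    HttpClient client = HttpClient.newHttpClient();

    // ?timeout=60 asks Service Bus to hold the request open for up to 60 seconds
    // if the queue is empty, instead of returning 204 immediately.
    HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("https://mynamespace.servicebus.windows.net/myqueue/messages/head?timeout=60"))
            .header("Authorization", sasToken) // placeholder SAS token
            .timeout(Duration.ofSeconds(70))   // client timeout slightly above the server-side wait
            .DELETE()
            .build();

    HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
    if (response.statusCode() == 200) {
        process(response.body()); // a message was received (and deleted); placeholder handler
    } else if (response.statusCode() == 204) {
        // nothing arrived within the 60-second window; simply poll again
    }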
To understand long polling, I recommend reading the Comet entry on Wikipedia: https://en.wikipedia.org/wiki/Comet_(programming). There is also an answered thread (Long polling in java) that explains how the HttpURLConnection class supports long polling in Java.
As far as I know, a simple alternative to HttpURLConnection for a Java client is the CometD client library. You can refer to the Client Library section of its official documentation, https://docs.cometd.org/current/reference/#_java_client, to learn how to implement a long-polling client in Java. You can download the library at https://download.cometd.org/.
The sample code from the official documentation:
// Create (and eventually set up) Jetty's HttpClient:
HttpClient httpClient = new HttpClient();
httpClient.start();
// Prepare the transport
Map<String, Object> options = new HashMap<String, Object>();
ClientTransport transport = new LongPollingTransport(options, httpClient);
// Create the BayeuxClient
ClientSession client = new BayeuxClient("http://localhost:8080/cometd", transport);
// Here set up the BayeuxClient, for example:
client.getChannel(Channel.META_HANDSHAKE).addListener(new ClientSessionChannel.MessageListener() {
    public void onMessage(ClientSessionChannel channel, Message message) {
        if (message.isSuccessful()) {
            // Here handshake is successful
        }
    }
});
client.handshake();
Note: There are two REST APIs of Azure Service Bus for getting messaging entities: Get Entity (https://msdn.microsoft.com/en-us/library/azure/hh780754.aspx) and Entities Discovery (https://msdn.microsoft.com/en-us/library/azure/hh780782.aspx). You need to delete the consumed messaging entity manually through the Delete Entity REST API. Requesting any of these REST APIs first requires an access_token obtained through the "Request a Token from ACS" API for secure access.
