Messages are not getting consumed from solace queue - java

I am using spring integration int-jms:message-driven-channel-adapter to consume message from solace queue.
I see below mentioned error in server logs
org.springframework.jms.listener.DefaultMessageListenerContainer- Execution of JMS message listener failed, and no ErrorHandler has been set.
javax.jms.TransactionRolledBackException: Error comitting - transaction rolled back (Transaction '12427' unexpectedly rolled back during commit attempt. (((Client name: xxxx.yyyy.com/7034/#0002000a Local addr: 123123 Remote addr: aaa.bbb.com:12345) - ) com.solacesystems.jcsmp.JCSMPErrorResponseException: 503: Message Consume Failure [Subcode:48]))
JMS configuration is as mentioned below
<int-jms:message-driven-channel-adapter
    id="IdMessageDrivenChannelAdapter"
    send-timeout="5000"
    max-messages-per-task="-1"
    idle-task-execution-limit="100"
    max-concurrent-consumers="2"
    connection-factory="appCachedConnectionFactory"
    destination="appInQueue"
    channel="reqChannel"
    error-channel="errorChannel"
    acknowledge="transacted" />
Any pointers to solving this error would be really helpful.

The error indicates that a message could not be consumed during a transaction. The cause could be one of several issues, such as the message having been deleted or expired, the queue not being found, or the queue being shut down.
You can analyze the rest of the API logs or the event logs on the Solace router to find out why the message could not be consumed.
The subcode documentation that you linked in the comments refers to the Solace .NET API. For a list of JCSMP errors, their subcodes, and explanations, please see the documentation here:
http://docs.solace.com/API-Developer-Online-Ref-Documentation/java/constant-values.html
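Separately, the first line of the log ("no ErrorHandler has been set") means the listener container only logs the failure at WARN by default. If you define the listener container as its own bean (Spring Integration's adapter can reference an externally defined container), you can register an ErrorHandler on it so the root cause is surfaced the way you want. This is only a rough sketch with plain Spring JMS; the class name, wiring, and log message are placeholders:

import javax.jms.ConnectionFactory;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.jms.listener.DefaultMessageListenerContainer;
import org.springframework.util.ErrorHandler;

public class ListenerContainerFactory {

    private static final Logger log = LoggerFactory.getLogger(ListenerContainerFactory.class);

    // Sketch only: expose this as a bean and point the adapter at it.
    public static DefaultMessageListenerContainer container(ConnectionFactory cf) {
        DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
        container.setConnectionFactory(cf);
        container.setDestinationName("appInQueue");   // same queue as in the XML config
        container.setSessionTransacted(true);         // matches acknowledge="transacted"
        container.setConcurrentConsumers(2);
        // Without a handler, the TransactionRolledBackException is only logged at WARN.
        container.setErrorHandler(new ErrorHandler() {
            @Override
            public void handleError(Throwable t) {
                log.error("JMS listener failed, message will be redelivered", t);
            }
        });
        return container;
    }
}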

Related

How to avoid losing messages with Kafka streams

We have a streams application that consumes messages from a source topic, does some processing, and forwards the results to a destination topic.
The structure of the messages is controlled by some Avro schemas.
When it starts consuming messages, if the schema is not cached yet the application tries to retrieve it from the schema registry. If for whatever reason the schema registry is not available (say, a network glitch), then the message currently being processed is lost, because the default handler is something called LogAndContinueExceptionHandler.
o.a.k.s.e.LogAndContinueExceptionHandler : Exception caught during Deserialization, taskId: 1_5, topic: my.topic.v1, partition: 5, offset: 142768
org.apache.kafka.common.errors.SerializationException: Error retrieving Avro schema for id 62
Caused by: java.net.SocketTimeoutException: connect timed out
at java.base/java.net.PlainSocketImpl.socketConnect(Native Method) ~[na:na]
...
o.a.k.s.p.internals.RecordDeserializer : stream-thread [my-app-StreamThread-3] task [1_5] Skipping record due to deserialization error. topic=[my.topic.v1] partition=[5] offset=[142768]
...
So my question is: what would be the proper way of dealing with situations like the one described above, so that no messages are lost no matter what? Is there an out-of-the-box LogAndRollbackExceptionHandler error handler, or a way of implementing your own?
Thank you in advance for your inputs.
I've not worked a lot with Kafka, but when I did, I remember having issues such as the one you are describing in our system.
Let me tell you how we took care of our scenarios; maybe it will help you out too.
Scenario 1: If your messages are being lost on the publishing side (publisher --> Kafka), you can configure Kafka's acknowledgement setting according to your needs. If you use Spring Cloud Stream with Kafka, the property is spring.cloud.stream.kafka.binder.required-acks (a plain-producer sketch follows the list below).
Possible values:
At most once (Ack=0)
Publisher does not care if Kafka acknowledges or not.
Send and forget
Data loss is possible
At least once (Ack=1)
If Kafka does not acknowledge, publisher resends message.
Possible duplication.
Acknowledgment is sent before message is copied to replicas.
Exactly once (Ack=all)
If Kafka does not acknowledge, publisher resends message.
However, if a message gets sent more than once to Kafka, there is no duplication.
An internal sequence number is used to decide whether the message has already been written to the topic.
The min.insync.replicas property needs to be set to specify the minimum number of replicas that must be in sync before Kafka acknowledges to the producer.
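To make the acks discussion concrete, here is a minimal sketch using a plain KafkaProducer rather than the Spring Cloud Stream binder; the broker address, topic name, and key/value are placeholders:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class AcksAllProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // "all" waits for every in-sync replica; combine with the broker/topic
        // setting min.insync.replicas to control how many replicas that means.
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        props.put(ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE);

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("my.topic.v1", "key", "value"));
        }
    }
}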
Scenario 2: If your data is being lost on the consumer side (Kafka --> consumer), you can change Kafka's auto-commit behaviour to suit your usage. If you are using Spring Cloud Stream, the property is spring.cloud.stream.kafka.bindings.input.consumer.autoCommitOffset.
By default, autoCommitOffset is true, and every message delivered to the consumer is "committed" at Kafka's end, meaning it won't be sent again. However, if you set autoCommitOffset to false, you have the power to poll messages from Kafka in your code and, only once you are done with your work, explicitly commit to let Kafka know that you are finished with the message.
If a message is not committed, Kafka will keep redelivering it until it is.
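As a rough illustration of that manual-commit approach with the plain consumer API (the broker address, group id, and topic name are placeholders, and process() stands in for your own logic):

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ManualCommitConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-consumer-group");       // placeholder
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false"); // take over committing ourselves

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my.topic.v1"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    process(record); // your business logic; throw to avoid committing
                }
                // Only after successful processing are the offsets marked as done;
                // an uncommitted batch is re-delivered after a restart or rebalance.
                consumer.commitSync();
            }
        }
    }

    private static void process(ConsumerRecord<String, String> record) {
        System.out.printf("%s-%d@%d: %s%n",
                record.topic(), record.partition(), record.offset(), record.value());
    }
}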
Hope this helps you out, or at least points you in the right direction.
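On the Kafka Streams side specifically, there is no LogAndRollbackExceptionHandler out of the box, but the built-in LogAndFailExceptionHandler stops the application instead of skipping the record, and you can plug in your own handler via default.deserialization.exception.handler. A minimal sketch of a custom handler that fails rather than continues (class and logger names are placeholders):

import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.streams.errors.DeserializationExceptionHandler;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LogAndFailOnDeserializationError implements DeserializationExceptionHandler {

    private static final Logger log = LoggerFactory.getLogger(LogAndFailOnDeserializationError.class);

    @Override
    public DeserializationHandlerResponse handle(ProcessorContext context,
                                                 ConsumerRecord<byte[], byte[]> record,
                                                 Exception exception) {
        log.error("Deserialization failed for {}-{}@{}; stopping instead of skipping",
                record.topic(), record.partition(), record.offset(), exception);
        // FAIL stops the stream thread, so the offset is not committed and the
        // record is not silently dropped; it is re-read after a restart.
        return DeserializationHandlerResponse.FAIL;
    }

    @Override
    public void configure(Map<String, ?> configs) {
        // no configuration needed for this sketch
    }
}

It would be registered through the streams configuration, e.g. props.put(StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG, LogAndFailOnDeserializationError.class).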

ActiveMQ - cannot rollback non transacted session INDIVIDUAL_ACK

Is it possible to roll back an asynchronously processed message in ActiveMQ? I'm consuming the next message while the first one is still processing, so when I try to roll back the first message on another thread (not an ActiveMQ pool thread), I get the above error. Should I eventually send the message to the DLQ manually?
Message error handling can work a couple of ways:
Broker-side 'redelivery policy': the client rolls the message back n times (the default is usually 6 retries) and the broker then automatically moves the message to a Dead Letter Queue (DLQ).
Client-side: the application consumes the message and then produces it to the DLQ itself.
Option #1 is good for planned or unplanned outages (database down, etc.) where you want automatic retry. The redelivery policy can also be configured when the client connects to the broker, as sketched below.
Option #2 is good for 'bad data' scenarios where you know the message will never be processable. This is ideal because you can move the message out of the way on the first consumption instead of rejecting it n times.
When you combine infinite retry from #1 with #2 in your application flow, you get a robust process: automatic retry, plus moving bad data out of the way quickly. Best of breed =)
ActiveMQ Redelivery policy
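A minimal sketch of option #1 configured on the client side; the broker URL and queue name are placeholders, and a transacted session is assumed here so that rollback() triggers redelivery:

import javax.jms.Connection;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.RedeliveryPolicy;

public class RedeliveryPolicyExample {
    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616"); // placeholder broker URL

        // Client-configured redelivery policy. Once the retries are exhausted,
        // the broker moves the message to the DLQ (ActiveMQ.DLQ by default).
        RedeliveryPolicy policy = factory.getRedeliveryPolicy();
        policy.setMaximumRedeliveries(6);
        policy.setInitialRedeliveryDelay(1000);
        policy.setUseExponentialBackOff(true);

        Connection connection = factory.createConnection();
        connection.start();

        Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
        MessageConsumer consumer = session.createConsumer(session.createQueue("MY.QUEUE")); // placeholder
        try {
            // process(consumer.receive());
            session.commit();
        } catch (Exception e) {
            session.rollback(); // counted against the redelivery policy above
        }
    }
}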

kafka: Commit offsets failed with retriable exception. You should retry committing offsets

[o.a.k.c.c.i.ConsumerCoordinator] [Auto offset commit failed for group consumer-group: Commit offsets failed with retriable exception. You should retry committing offsets.] []
Why does this error occur in the Kafka consumer? What does it mean?
The consumer properties I am using are:
fetch.min.bytes:1
enable.auto.commit:true
auto.offset.reset:latest
auto.commit.interval.ms:5000
request.timeout.ms:300000
session.timeout.ms:20000
max.poll.interval.ms:600000
max.poll.records:500
max.partition.fetch.bytes:10485760
What is the reason for this error? I am guessing that, because of it, the consumer is doing duplicated work right now (polling the same messages again).
I am using neither consumer.commitAsync() nor consumer.commitSync().
The consumer gives this error when it catches an instance of RetriableException.
The reasons for it can vary:
if the coordinator is still loading the group metadata
if the group metadata topic has not been created yet
if network or disk corruption occurs, or a miscellaneous disk-related or network-related IOException is raised while handling a request
if the server disconnected before a request could be completed
if the client's metadata is out of date
if there is no currently available leader for the given partition
if no brokers were available to complete the request
As you can see from the list above, all of these can be temporary issues, which is why it is suggested to retry the request.
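If you do switch to manual commits, one common pattern for exactly this situation is to commit asynchronously and fall back to a blocking commit when the callback reports a retriable failure. This is only a rough sketch; the consumer is assumed to be configured with enable.auto.commit=false and the topic name is a placeholder:

import java.time.Duration;
import java.util.Collections;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.RetriableCommitFailedException;

public class RetryingCommitLoop {

    static void pollAndCommit(KafkaConsumer<String, String> consumer) {
        consumer.subscribe(Collections.singletonList("my.topic.v1"));
        while (true) {
            consumer.poll(Duration.ofMillis(500)); // process the records here
            consumer.commitAsync((offsets, exception) -> {
                if (exception instanceof RetriableCommitFailedException) {
                    // Temporary condition (coordinator loading, network blip, ...):
                    // retry with a blocking commit, which keeps retrying internally.
                    consumer.commitSync(offsets);
                } else if (exception != null) {
                    exception.printStackTrace(); // non-retriable failure, surface it
                }
            });
        }
    }
}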

How to configure Camel's RedeliveryPolicy retriesExhaustedLogLevel?

I have set up an errorHandler in a Camel route that will retry a message several times before sending it to a dead letter channel (an ActiveMQ queue in this case). What I would like is an ERROR log when the message has failed to be retried the maximum number of times and is then sent to the dead letter queue.
Looking at the docs for error handling and dead letter channels, it seems that there are two relevant options on the RedeliveryPolicy: retryAttemptedLogLevel and retriesExhaustedLogLevel. Supposedly retriesExhaustedLogLevel already defaults to LoggingLevel.ERROR, but it does not appear to actually log anything when it has exhausted all retries and routes the message to the dead letter channel.
Here is my errorHandler definition via Java DSL.
.errorHandler(this.deadLetterChannel(MY_ACTIVE_MQ_DEAD_LETTER)
    .useOriginalMessage()
    .maximumRedeliveries(3)
    .useExponentialBackOff()
    .retriesExhaustedLogLevel(LoggingLevel.ERROR)
    .retryAttemptedLogLevel(LoggingLevel.WARN))
I have explicitly set the level to ERROR now and it still does not appear to log anything (at any logging level). On the other hand, retryAttemptedLogLevel is working just fine and logs at the appropriate LoggingLevel (i.e., I could set retryAttemptedLogLevel to LoggingLevel.ERROR and see the retries as ERROR logs). However, I only want a single ERROR log in the event of exhaustion, rather than an ERROR log for each retry when a subsequent retry could still succeed.
Maybe I am missing something, but it seems that retriesExhaustedLogLevel does not do anything, or at least does not log anything when the ErrorHandler is configured as a DeadLetterChannel. Is there some configuration I am still missing, or does this feature of RedeliveryPolicy simply not apply to this particular ErrorHandlerFactory?
I could also set up a route to send my exhausted messages that simply logs and routes to my dead letter channel, but I would prefer to try and use what is already built into the ErrorHandler if possible.
I updated the ErrorHandler's dead letter channel to be a direct endpoint and left the two logLevel configs the same. I got the 3 retry-attempted WARN logs, but no ERROR log telling me the retries were exhausted. I did, however, set up a small route listening on the direct dead-letter endpoint that logs, and that is working.
It is not a direct solution to my desire to have an ERROR log on exhaustion, but it is an acceptable workaround for now (sketched below).
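For reference, a minimal sketch of that workaround; the direct endpoint name and the activemq endpoint URI are placeholders for the real dead letter queue:

import org.apache.camel.LoggingLevel;
import org.apache.camel.builder.RouteBuilder;

public class DeadLetterWorkaroundRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Exhausted messages go to a direct endpoint first, not straight to the broker.
        errorHandler(deadLetterChannel("direct:deadLetter")
                .useOriginalMessage()
                .maximumRedeliveries(3)
                .useExponentialBackOff()
                .retryAttemptedLogLevel(LoggingLevel.WARN));

        // The small listening route: one ERROR log per exhausted message, then on to the queue.
        from("direct:deadLetter")
                .log(LoggingLevel.ERROR, "Retries exhausted, sending to DLQ: ${exceptionMessage}")
                .to("activemq:queue:MY.DEAD.LETTER"); // placeholder for the real ActiveMQ queue
    }
}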
Please try with this code:
.errorHandler(deadLetterChannel("kafka:sample-dead-topic")
    .maximumRedeliveries(4)
    .redeliveryDelay(60000)
    .retryAttemptedLogLevel(LoggingLevel.WARN)
    .retriesExhaustedLogLevel(LoggingLevel.ERROR)
    .logHandled(true)
    .allowRedeliveryWhileStopping(true)
    .logRetryStackTrace(true)
    .logExhausted(true)
    .logStackTrace(true)
    .logExhaustedMessageBody(true))
The retry is configured with a 1-minute interval (redeliveryDelay(60000)).
With this, the Camel application logged the errors for every retry with detailed information.

MQ JMS - messages not getting requeued, failing with error MQJMS1080 No Backout-Requeue queue defined

We are using a standalone MQ JMS client application (no app server) to consume WebSphere MQ messages. Our queue definitions are as follows:
APP_QUEUE1 - (QA, PUT enabled)
APP_QUEUE1.CL - (QL and target of above APP_QUEUE1)
APP_QUEUE1_BOQ - (QA and BOQNAME of APP_QUEUE1.CL, PUT enabled)
APP_QUEUE1_BOQ.CL - (QL and target of above APP_QUEUE1_BOQ )
BOTHRESH of APP_QUEUE1 = 3.
With the above set-up, when an exception occurs for the first time, I get an exception saying the backout queue is not defined, and the attempt to add the message to the dead letter queue also fails. Can someone explain why the message is not getting redelivered to the main queue (APP_QUEUE1) even though BOTHRESH is 3?
My understanding is that, in case of an exception, the message will be redelivered to APP_QUEUE1 3 times and after that it will be routed to the backout queue. Only if the backout queue is full or the put fails is the message added to the dead letter queue.
Can someone please tell me if there's anything wrong with the queue definitions, or whether something needs to be done in the application code?
