Axon Config - Kafka retry policies after @EventHandler throws an exception - java

I've started using Axon 4.3.1 (the latest version) in my project and I'm having a problem.
Where can I configure the Kafka retry policies applied after an @EventHandler throws an exception?
Note: I'm using the SubscribingEventProcessor type as the event processor (in both projects). I'm using separate projects: the command model uses Mongo and publishes events to Kafka, and the query model consumes events from Kafka (as the event bus). So they run in separate JVMs.
@ProcessingGroup("event-processor") is configured on the class with the @EventHandler method. I'd like a configuration that makes Kafka automatically retry after some time in error cases (in the query model project).
Can I use some default Axon component? Could I use something like spring-retry, or Kafka's own internal configs?
I've found the following in the documentation:
https://docs.axoniq.io/reference-guide/configuring-infrastructure-components/event-processing/event-processors#error-handling
"Based on the provided ErrorContext object, you can decide to ignore the error, schedule retries, perform dead-letter-queue delivery or rethrow the exception."
How can I configure (for example) scheduled retries after an @EventHandler fails?
Could you help me?
Thanks.

The current implementation of Axon's Kafka Extension (version 4.0-M2) does not support setting a retry policy for event handling.
I'd argue your best approach right now is to set up something like that on Kafka itself, if that's even possible. Otherwise, forcing a replay of the events through Kafka would be your best option.
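That said, the error-handling hook you quoted is available on the processor side: for a given processing group you can register a ListenerInvocationErrorHandler that decides what happens when an @EventHandler throws. Below is a minimal sketch, assuming Spring Boot auto-configuration and the "event-processor" processing group from your question; registering Axon's PropagatingErrorHandler makes exceptions propagate out of the SubscribingEventProcessor (to the component that published the event, i.e. the Kafka consumer side) instead of just being logged and skipped:

```java
import org.axonframework.config.EventProcessingConfigurer;
import org.axonframework.eventhandling.PropagatingErrorHandler;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Configuration;

@Configuration
public class AxonErrorHandlingConfig {

    @Autowired
    public void configure(EventProcessingConfigurer configurer) {
        // The default ListenerInvocationErrorHandler logs the error and moves on.
        // PropagatingErrorHandler rethrows instead, so the failure reaches the
        // caller that fed the event to the SubscribingEventProcessor.
        configurer.registerListenerInvocationErrorHandler(
                "event-processor",
                conf -> PropagatingErrorHandler.instance());
    }
}
```

A custom ListenerInvocationErrorHandler implementation could schedule retries itself (for example by delegating to spring-retry), but a retry policy with back-off is not something the extension gives you out of the box.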

Related

Error handling in Spring Cloud Stream Kafka in Batch mode

I am using Spring Cloud Stream with the Kafka binder to consume messages in batches from a Kafka topic. I am trying to implement an error-handling mechanism. As I understand it, I can't use Spring Cloud Stream's enableDLQ property in batch mode.
I have found RecoveringBatchErrorHandler and DeadLetterPublishingRecoverer in the spring-kafka documentation for retrying and sending failed messages, but I am not able to work out how to send the records to a custom DLQ topic while following the functional programming model. All the examples I can find use KafkaTemplate.
Are there any good examples of such an implementation?
This is the spring doc I have been referring to.
https://docs.spring.io/spring-kafka/docs/2.5.12.RELEASE/reference/html/#recovering-batch-eh
That version is no longer supported as OSS; see https://spring.io/projects/spring-kafka#support
With the current version, use the DefaultErrorHandler configured with a DeadLetterPublishingRecoverer, and throw a BatchListenerFailedException to tell the framework which record in the batch failed.
See https://docs.spring.io/spring-kafka/docs/current/reference/html/#annotation-error-handling and https://docs.spring.io/spring-kafka/docs/current/reference/html/#dead-letters and https://docs.spring.io/spring-kafka/docs/current/reference/html/#legacy-eh
Add a ListenerContainerCustomizer bean to add your configured error handler to the listener container.
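For illustration, a hedged sketch of how those pieces could fit together in a Spring Cloud Stream application; the DLQ topic name "my.custom.dlq", the byte[] key/value types, the KafkaTemplate bean, and the back-off values are assumptions:

```java
import org.apache.kafka.common.TopicPartition;
import org.springframework.cloud.stream.config.ListenerContainerCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.listener.AbstractMessageListenerContainer;
import org.springframework.kafka.listener.DeadLetterPublishingRecoverer;
import org.springframework.kafka.listener.DefaultErrorHandler;
import org.springframework.util.backoff.FixedBackOff;

@Configuration
public class BatchDlqConfig {

    @Bean
    public ListenerContainerCustomizer<AbstractMessageListenerContainer<byte[], byte[]>> customizer(
            KafkaTemplate<byte[], byte[]> template) {
        return (container, destinationName, group) -> {
            // Route the failed record to a custom DLQ topic, keeping its partition.
            DeadLetterPublishingRecoverer recoverer = new DeadLetterPublishingRecoverer(
                    template,
                    (record, exception) -> new TopicPartition("my.custom.dlq", record.partition()));
            // Retry twice, one second apart, before publishing to the DLQ.
            container.setCommonErrorHandler(
                    new DefaultErrorHandler(recoverer, new FixedBackOff(1000L, 2L)));
        };
    }
}
```

In the batch listener itself, throw a BatchListenerFailedException carrying the index (or the ConsumerRecord) of the bad record, so the error handler knows where in the batch to apply the recovery.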

Spring Boot JMS - when should a message be sent on a @Transactional method?

I was wondering: on a Spring Boot method marked as @Transactional, when should a message appear on the queue? I'm asking because I've noticed two different behaviours in two different applications.
Both applications have the following in common:
Spring Boot 2.0 based
JMS messages are sent using JmsTemplate, with setSessionTransacted set to true
No explicit transaction settings configured
A Mongo DB is used (via Spring Data), and a record is modified in the same method that sends the message
The major difference between the two applications is:
One has a JPA data source (using Spring Data), and a record is read and/or written in this method. The other application does not have this data source.
The difference in observed behaviour is that when the JPA data source is present, the message is sent at the end of the method. Without it, the message is sent immediately.
Is this the expected behaviour?
Can I configure the applications to behave the same way? Ideally I'd like the message to be sent at the end, so that any Mongo changes that fail would cancel the message send and roll back any JPA changes made. I realise that the Mongo changes are not part of any transaction created.
Thanks
With JMS and a DB you have two resources. To get fully transactional behaviour you need distributed transaction support.
If you don't have it, then even when the message is sent as the last operation, a failed send still leaves the data changed in the database.
To configure distributed transactions you need JTA. We use Bitronix in our application and it works very well.
Have a look at the docs: https://docs.spring.io/spring-boot/docs/current/reference/html/boot-features-jta.html
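For illustration, a minimal sketch of the kind of method being discussed; OrderService, OrderRepository, Order, and the queue name are hypothetical:

```java
import org.springframework.jms.core.JmsTemplate;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class OrderService { // hypothetical names, for illustration only

    private final JmsTemplate jmsTemplate;          // sessionTransacted = true
    private final OrderRepository orderRepository;  // hypothetical Spring Data JPA repository

    public OrderService(JmsTemplate jmsTemplate, OrderRepository orderRepository) {
        this.jmsTemplate = jmsTemplate;
        this.orderRepository = orderRepository;
    }

    @Transactional
    public void placeOrder(Order order) {
        orderRepository.save(order);
        // With a JPA transaction manager present, the transacted JMS session is
        // synchronized with the surrounding transaction, so the message only
        // becomes visible on the queue at commit. Without one, JmsTemplate
        // commits its own session and the message appears immediately. Only a
        // JTA transaction manager (e.g. Bitronix) makes the two commits atomic.
        jmsTemplate.convertAndSend("orders.queue", order.getId());
    }
}
```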

How to get back Kafka producer and consumer configuration (Java API)?

The use case is the following.
I am passing a producer or consumer reference through many object instances in Java code.
In some of them I would like to run some checks against the Kafka configuration.
That means I would like to get back the effective configuration stored in the Kafka producer/consumer (including defaults).
I do not see anything explicit in the Javadocs:
KafkaProducer
KafkaConsumer
So, how can I get back the Kafka producer and consumer configuration?
Unfortunately it's not possible. I have to admit it could be a useful feature, at least for showing the "core" configuration properties (while avoiding exposure of the "secrets" used for authentication, for example).
The only solution I see today is to keep a link between the consumer/producer instance and the properties bag used to set up the client configuration. I understand it's a waste of memory, because the configuration is held internally by the client, but you need to keep your own properties bag in order to have access to it.
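For illustration, a minimal sketch of that workaround; ConfiguredConsumer is a hypothetical wrapper name:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.KafkaConsumer;

// Hypothetical wrapper that keeps the properties bag next to the client,
// since KafkaConsumer does not expose its effective configuration.
public class ConfiguredConsumer<K, V> {

    private final Map<String, Object> config;
    private final KafkaConsumer<K, V> consumer;

    public ConfiguredConsumer(Map<String, Object> config) {
        this.config = Collections.unmodifiableMap(new HashMap<>(config));
        this.consumer = new KafkaConsumer<>(this.config);
    }

    public KafkaConsumer<K, V> consumer() {
        return consumer;
    }

    // Note: this returns only the properties you set, not the defaults the
    // client fills in internally.
    public Map<String, Object> config() {
        return config;
    }
}
```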

Logging aspect for exceptions with JMS retry mechanism

In our project we use ActiveMQ (JmsTemplate) to publish many events from one webapp to another.
We also use a logging aspect (Spring AOP), mainly to log errors and entering/exiting methods.
Now, we sometimes face race conditions in the flow of the system: an entity is created in one webapp and an event is fired to notify another webapp, but handling it in the other webapp requires a different event to have been handled first. When that scenario happens, the handling fails (for example, an id is missing) and the message is immediately retried (JMS redelivery); on the second retry it usually works (never more than 3 retries are required).
So basically, we have exceptions as part of our day-to-day flow, but our log files are huge and messy because of the exceptions thrown in such scenarios. Any idea how we can avoid logging the exceptions from the first few retries and only log from a later one? Or maybe there is another approach you can recommend?
Thanks.
You can use the JMSXDeliveryCount property of the Message to get the redelivery count. See http://activemq.apache.org/activemq-message-properties.html
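For illustration, a minimal sketch of how the logging side could use that property; the helper name and the threshold of 3 (taken from "never more than 3 retries are required") are assumptions:

```java
import javax.jms.JMSException;
import javax.jms.Message;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class RetryAwareLogging { // hypothetical helper, e.g. called from the aspect

    private static final Logger log = LoggerFactory.getLogger(RetryAwareLogging.class);
    private static final int MAX_EXPECTED_RETRIES = 3; // assumed from the question

    public static void logHandlingFailure(Message message, Exception exception) throws JMSException {
        // JMSXDeliveryCount is 1 on the first delivery and increments on each redelivery.
        int deliveryCount = message.getIntProperty("JMSXDeliveryCount");
        if (deliveryCount > MAX_EXPECTED_RETRIES) {
            // More redeliveries than the race condition normally needs: log loudly.
            log.error("Handling failed after {} deliveries", deliveryCount, exception);
        } else {
            // Expected transient ordering race: keep it out of the error log.
            log.debug("Transient failure on delivery {}; message will be redelivered",
                    deliveryCount, exception);
        }
    }
}
```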

Akka Persistence - issue with Recovery - how to diagnose?

I am using the Java version of akka-persistence 2.3.4, where I have several actors that extend PersistentActor and handle both RecoveryCompleted and RecoveryFailure. I am using the default journaling and snapshot plugins.
I'm running into an issue with recovery where, on some of my actors, I get neither a RecoveryCompleted nor a RecoveryFailure message, and the actor gets stuck.
What kind of diagnostics can I use to figure out why this may be happening?
I have tried turning on debug logging, but no Akka logging shows up there.
