We are using Java configuration for the producer to achieve an asynchronous retry mechanism, and on the consumer side we are using a message-driven adapter (XML configuration).
Note that on the producer side we have
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, MessageSerializer.class.getName());
On the producer we don't have an option for setting an encoder implementation, as it only supports a Serializer interface implementation:
MessageSerializer implements Serializer
On the consumer side we don't have a deserializer option, as it only supports a Decoder interface implementation:
<int-kafka:message-driven-channel-adapter
id="inAdapter"
channel="fromKafka"
connection-factory="connectionFactory"
key-decoder="kafkaKeyDecoder"
payload-decoder="kafkaDecoder"
topics="${topic.list}"
offset-manager="offsetManager"/>
kafkaDecoder implements Decoder
Therefore we are getting an error during serialization; can you please suggest how to handle this?
You are using an old version of spring-integration-kafka; it is not configured that way anymore; the current version is 2.1.0 and it sits on top of spring-kafka 1.1.2.
The integration components are documented in the spring-kafka reference. Configuring Spring Kafka itself is elsewhere in that book.
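For reference, with the current stack the consumer side also uses a plain Deserializer (mirroring the producer's Serializer), configured on a spring-kafka listener container rather than via decoders. A minimal sketch, assuming spring-integration-kafka 2.x / spring-kafka 1.1.x package locations; Message stands for your payload type, and MessageDeserializer is a hypothetical Deserializer counterpart to your MessageSerializer:

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.channel.DirectChannel;
import org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.listener.KafkaMessageListenerContainer;
import org.springframework.kafka.listener.config.ContainerProperties;

@Configuration
public class ConsumerConfiguration {

    @Bean
    public ConsumerFactory<String, Message> consumerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "myGroup");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        // hypothetical Deserializer<Message>, the consumer-side twin of MessageSerializer
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, MessageDeserializer.class);
        return new DefaultKafkaConsumerFactory<>(props);
    }

    @Bean
    public KafkaMessageDrivenChannelAdapter<String, Message> inAdapter(
            ConsumerFactory<String, Message> consumerFactory) {
        KafkaMessageListenerContainer<String, Message> container =
                new KafkaMessageListenerContainer<>(consumerFactory,
                        new ContainerProperties("myTopic"));
        KafkaMessageDrivenChannelAdapter<String, Message> adapter =
                new KafkaMessageDrivenChannelAdapter<>(container);
        adapter.setOutputChannel(fromKafka());
        return adapter;
    }

    @Bean
    public DirectChannel fromKafka() {
        return new DirectChannel();
    }
}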
I am using Spring Cloud Stream and Kafka Binder to consume messages in batches from a Kafka Topic. I am trying to implement an error handling mechanism. As per my understanding I can't use Spring Cloud Stream's enableDLQ property in batch mode.
From the spring-kafka documentation I have found RecoveringBatchErrorHandler and DeadLetterPublishingRecoverer for retrying and sending failed messages. But I am not able to understand how to send the records to a custom DLQ topic while following the functional programming model. All the examples I can see are using KafkaTemplates.
Are there any good example where I can find the implementation?
This is the spring doc I have been referring to.
https://docs.spring.io/spring-kafka/docs/2.5.12.RELEASE/reference/html/#recovering-batch-eh
That version is no longer supported as OSS; see https://spring.io/projects/spring-kafka#support
With the current version, use the DefaultErrorHandler configured with a DeadLetterPublishingRecoverer and throw a BatchListenerFailedException to tell the framework which record in the batch failed.
See https://docs.spring.io/spring-kafka/docs/current/reference/html/#annotation-error-handling and https://docs.spring.io/spring-kafka/docs/current/reference/html/#dead-letters and https://docs.spring.io/spring-kafka/docs/current/reference/html/#legacy-eh
Add a ListenerContainerCustomizer bean to add your configured error handler to the listener container.
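Putting those pieces together, a minimal sketch, assuming a recent spring-kafka (2.8+) on the classpath, batch mode enabled on the binding, and a suitably typed KafkaTemplate bean; the my.custom.dlq topic name and process() call are illustrative:

import java.util.List;
import java.util.function.Consumer;

import org.apache.kafka.common.TopicPartition;
import org.springframework.cloud.stream.config.ListenerContainerCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.listener.AbstractMessageListenerContainer;
import org.springframework.kafka.listener.BatchListenerFailedException;
import org.springframework.kafka.listener.DeadLetterPublishingRecoverer;
import org.springframework.kafka.listener.DefaultErrorHandler;
import org.springframework.util.backoff.FixedBackOff;

@Configuration
public class BatchDlqConfiguration {

    @Bean
    public ListenerContainerCustomizer<AbstractMessageListenerContainer<byte[], byte[]>> customizer(
            KafkaTemplate<Object, Object> template) {
        return (container, destinationName, group) -> {
            // publish failed records to a custom topic instead of the default <topic>.DLT
            DeadLetterPublishingRecoverer recoverer = new DeadLetterPublishingRecoverer(template,
                    (record, ex) -> new TopicPartition("my.custom.dlq", record.partition()));
            // two retries, one second apart, then hand the record to the recoverer
            container.setCommonErrorHandler(
                    new DefaultErrorHandler(recoverer, new FixedBackOff(1000L, 2L)));
        };
    }

    @Bean
    public Consumer<List<String>> myFunction() {
        return batch -> {
            for (int i = 0; i < batch.size(); i++) {
                try {
                    process(batch.get(i));
                }
                catch (Exception ex) {
                    // tells the framework which record failed: records before the index
                    // are committed; the failed one is retried and eventually recovered
                    throw new BatchListenerFailedException("failed to process record", ex, i);
                }
            }
        };
    }

    private void process(String record) {
        // illustrative business logic
    }
}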
I've started to use Axon 4.3.1 (latest version) in my project and having a problem.
Where can I configure the Kafka retry policies after an @EventHandler throws an exception?
Note: I'm using the SubscribingEventProcessor type as the event processor (in both projects), and the projects are separate. The command model uses Mongo and publishes events on Kafka; the query model consumes events from Kafka (the event bus). So they run in separate JVMs.
@ProcessingGroup("event-processor") is configured on the class with the event handler method. I'd like a configuration that makes Kafka automatically retry after some time in error cases (in the query model project).
Can I use some default Axon component? Could I use something like spring-retry, or Kafka's internal configs themselves?
I've found something related in the documentation:
https://docs.axoniq.io/reference-guide/configuring-infrastructure-components/event-processing/event-processors#error-handling
"Based on the provided ErrorContext object, you can decide to ignore the error, schedule retries, perform dead-letter-queue delivery or rethrow the exception."
How can I configure (for example) scheduled retries on an @EventHandler after errors?
Could you help me?
Thanks.
The current implementation of Axon's Kafka Extension (version 4.0-M2) does not support setting a retry policy when it comes to event handling.
I'd argue your best approach right now is to set up something like that on Kafka itself, if that's even possible. Otherwise, forcing a replay of the events through Kafka would be your best option.
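If you need handler-level retries in the meantime, one workaround (along the lines of the spring-retry idea you mention) is to retry inside the event handler yourself. A minimal sketch, assuming spring-retry 1.3+ on the classpath; SomethingHappenedEvent and updateProjection() are placeholders:

import org.axonframework.config.ProcessingGroup;
import org.axonframework.eventhandling.EventHandler;
import org.springframework.retry.support.RetryTemplate;
import org.springframework.stereotype.Component;

@Component
@ProcessingGroup("event-processor")
public class MyProjection {

    // three attempts, two seconds apart, before giving up and propagating the error
    private final RetryTemplate retryTemplate = RetryTemplate.builder()
            .maxAttempts(3)
            .fixedBackoff(2000)
            .build();

    @EventHandler
    public void on(SomethingHappenedEvent event) {
        retryTemplate.execute(context -> {
            updateProjection(event); // placeholder for the query-model update
            return null;
        });
    }

    private void updateProjection(SomethingHappenedEvent event) {
        // write to the query model store
    }
}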
The use case is the following.
I am passing a producer or consumer reference across many object instances in Java code.
In some of them I would like to do some checks on the Kafka configuration.
That is, I would like to retrieve the effective configuration stored in the Kafka producer/consumer (including defaults).
I do not see anything explicit in the Javadocs:
KafkaProducer
KafkaConsumer
So, how to get back Kafka producer and consumer configuration?
Unfortunately it's not possible. I have to admit it could be a useful feature, at least for showing the "core" configuration properties (while avoiding exposure of the "secrets" used for authentication, for example).
The only solution I see today is to keep a link between the consumer/producer instance and the properties bag used to set the client configuration. I understand it's a waste of memory, because such configuration already lives inside the client, but you need to keep your own properties bag in order to have it.
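A minimal sketch of that workaround, with illustrative names: a small wrapper that keeps the properties bag next to the client instance, so any object receiving the reference can inspect the configuration the client was created with:

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.producer.KafkaProducer;

// Pairs a KafkaProducer with the configuration it was created from,
// since the client itself does not expose its configuration.
public final class ConfiguredProducer<K, V> {

    private final KafkaProducer<K, V> producer;
    private final Map<String, Object> config;

    public ConfiguredProducer(Map<String, Object> config) {
        this.config = Collections.unmodifiableMap(new HashMap<>(config));
        this.producer = new KafkaProducer<>(this.config);
    }

    public KafkaProducer<K, V> producer() {
        return producer;
    }

    // read-only view of the properties used to create the client
    public Map<String, Object> config() {
        return config;
    }
}

Passing the wrapper around instead of the bare producer lets the downstream checks call config() without every call site having to carry the map separately.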
I would like to use the new Connector strategy within Apache Camel 2.19.x to use the Restlet Producer to connect to a JasperServer instance on a scheduled basis to download certain reports.
Basically what I would like to do is convert the following:-
from("timer://runOnce?repeatCount=1&delay=5000")
.setHeader(RestletConstants.RESTLET_LOGIN).simple("jasperadmin")
.setHeader(RestletConstants.RESTLET_PASSWORD).simple("jasperadmin")
.to("restlet:http://localhost:8181/jasperserver/rest_v2/reports/reports/interactive/MapReport.pdf?restletMethods=get").to("file:C:/tmp/camel")
to
from("jasper-server").to("file:C:/tmp/camel")
The problem is that the RestletComponent sets up a RestletConsumer by default, and I am not sure how to put it into producer mode using a component option, or whether I should instead use the SchedulerComponent as my base and somehow integrate the Restlet functionality into that component. Would it be better to use the HttpComponent as the base component instead?
I haven't really used the RestletComponent, but I managed a similar route to yours using the http4 component, like:
from("timer://").to("direct:http-endpoint");
from("direct:http-endpoint").to("restlet://...");
I believe this is what is described in the Restlet Component docs.
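Applied to the question's route, a sketch of that two-route split (Camel 2.19.x-era APIs assumed; the credentials and URLs are the question's own examples):

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.restlet.RestletConstants;

public class JasperReportRoute extends RouteBuilder {

    @Override
    public void configure() {
        // the timer owns the scheduling...
        from("timer://runOnce?repeatCount=1&delay=5000")
            .to("direct:http-endpoint");

        // ...and a separate route owns the Restlet producer call
        from("direct:http-endpoint")
            .setHeader(RestletConstants.RESTLET_LOGIN, constant("jasperadmin"))
            .setHeader(RestletConstants.RESTLET_PASSWORD, constant("jasperadmin"))
            .to("restlet:http://localhost:8181/jasperserver/rest_v2/reports/reports/interactive/MapReport.pdf?restletMethods=get")
            .to("file:C:/tmp/camel");
    }
}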
I am using the Java DSL to configure my channel adapters. The thing I want to achieve can be described with the following piece of code:
IntegrationFlows
.from(Jms.messageDriverChannelAdapter(mqCacheConnectionFactory)
.configureListenerContainer(container -> container.sessionTransacted(transacted))
.destinations(inputDestination1, inputDestination2) // missing method
.autoStartup(autoStartup)
.id(channelName)
.errorChannel(errorChannel)
)
.channel(commonChannel)
.get();
So I would like to have a messageDriverChannelAdapter that is capable of receiving from multiple JMS destinations. Is this achievable?
No, it isn't possible.
The Spring Integration JMS support is fully based on the Spring JMS foundation, and its AbstractMessageListenerContainer provides the ability to consume from only one destination. Therefore Jms.messageDriverChannelAdapter() doesn't provide an option to configure several destinations to listen to.
The only option you have is to configure several Jms.messageDriverChannelAdapter()s. What is good with Spring Integration is that you can output them all to the same MessageChannel, so you won't have so much copy/paste hell.
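A minimal sketch of that approach, reusing the variable names from the question's snippet: one flow per destination, both ending on the same commonChannel:

import org.springframework.context.annotation.Bean;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;
import org.springframework.integration.jms.dsl.Jms;

// mqCacheConnectionFactory, inputDestination1/2, transacted and commonChannel
// are the same references used in the question's snippet
@Bean
public IntegrationFlow inputFlow1() {
    return IntegrationFlows
            .from(Jms.messageDriverChannelAdapter(mqCacheConnectionFactory)
                    .configureListenerContainer(c -> c.sessionTransacted(transacted))
                    .destination(inputDestination1))
            .channel(commonChannel)
            .get();
}

@Bean
public IntegrationFlow inputFlow2() {
    return IntegrationFlows
            .from(Jms.messageDriverChannelAdapter(mqCacheConnectionFactory)
                    .configureListenerContainer(c -> c.sessionTransacted(transacted))
                    .destination(inputDestination2))
            .channel(commonChannel)
            .get();
}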