I have a case where I have to consume event A, do some processing, and then produce event B. My problem is: what happens if the processing crashes and the application cannot produce B even though it has already consumed A? My approach is to acknowledge only after successfully publishing B. Is that correct, or should I implement another solution for this case?
@KafkaListener(
        id = TOPIC_ID,
        topics = TOPIC_ID,
        groupId = GROUP_ID,
        containerFactory = LISTENER_CONTAINER_FACTORY
)
public void listen(List<Message<A>> messages, Acknowledgment acknowledgment) {
    try {
        final AEvent aEvent = messages.stream()
                .filter(message -> null != message.getPayload())
                .map(Message::getPayload)
                .findFirst()
                .get();
        processDao.doSomeProcessing() // returns a Mono<Example> by calling an external API
                .subscribe(
                        response -> {
                            ListenableFuture<SendResult<String, BEvent>> future = kafkaProducerTemplate.send(buildBEvent());
                            future.addCallback(new ListenableFutureCallback<SendResult<String, BEvent>>() {
                                @Override
                                public void onSuccess(SendResult<String, BEvent> bEventSendResult) {
                                    //TODO: do when event published successfully
                                }

                                @Override
                                public void onFailure(Throwable exception) {
                                    exception.printStackTrace();
                                    throw new ExampleException();
                                }
                            });
                        },
                        error -> {
                            error.printStackTrace();
                            throw new ExampleException();
                        }
                );
        acknowledgment.acknowledge(); // ??
    } catch (ExampleException exception) {
        exception.printStackTrace();
    }
}
You can't manage Kafka acknowledgments when using async code such as Reactor.
Kafka does not manage discrete acks for each record in a topic/partition, just the last committed offset for the partition.
If you process two records asynchronously, you will have a race as to which offset will be committed first.
You need to perform the sends on the listener container thread to maintain proper ordering.
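One way to follow that advice, sketched with the question's processDao, kafkaProducerTemplate, and buildBEvent() (all assumptions carried over from the code above, not a verified API): block on the Mono and on the send future so the whole flow stays on the listener container thread, and acknowledge only after the send has succeeded.
@KafkaListener(id = TOPIC_ID, topics = TOPIC_ID, groupId = GROUP_ID,
        containerFactory = LISTENER_CONTAINER_FACTORY)
public void listen(List<Message<A>> messages, Acknowledgment acknowledgment) throws Exception {
    Example response = processDao.doSomeProcessing().block(); // stay synchronous on the container thread
    kafkaProducerTemplate.send(buildBEvent()).get();          // block until B is actually published
    acknowledgment.acknowledge();                             // commit the offset only after B is out
}
If anything throws before the acknowledge, the offset is not committed and the record is redelivered.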
I am new to Vert.x and async programming.
I have two verticles communicating via an event bus as follows:
//API Verticle
public class SearchAPIVerticle extends AbstractVerticle {
public static final String GET_USEARCH_DOCS = "get.usearch.docs";
@Autowired
private Integer defaultPort;
private void sendSearchRequest(RoutingContext routingContext) {
final JsonObject requestMessage = routingContext.getBodyAsJson();
final EventBus eventBus = vertx.eventBus();
eventBus.request(GET_USEARCH_DOCS, requestMessage, reply -> {
if (reply.succeeded()) {
Logger.info("Search Result = " + reply.result().body());
routingContext.response()
.putHeader("content-type", "application/json")
.setStatusCode(200)
.end((String) reply.result().body());
} else {
Logger.info("Document Search Request cannot be processed");
routingContext.response()
.setStatusCode(500)
.end();
}
});
}
@Override
public void start() throws Exception {
Logger.info("Starting the Gateway service (Event Sender) verticle");
// Create a Router
Router router = Router.router(vertx);
//Added bodyhandler so we can process json messages via the event bus
router.route().handler(BodyHandler.create());
// Mount the handler for incoming requests
// Find documents
router.post("/api/search/docs/*").handler(this::sendSearchRequest);
// Create an HTTP Server using default options
HttpServer server = vertx.createHttpServer();
// Handle every request using the router
server.requestHandler(router)
//start listening on port 8083
.listen(config().getInteger("http.port", 8083)).onSuccess(msg -> {
Logger.info("*************** Search Gateway Server started on "
+ server.actualPort() + " *************");
});
}
@Override
public void stop(){
//house keeping
}
}
//Below is the target verticle, which should make the multiple web client calls and merge the responses
@Component
public class SolrCloudVerticle extends AbstractVerticle {
public static final String GET_USEARCH_DOCS = "get.usearch.docs";
@Autowired
private SearchRepository searchRepositoryService;
@Override
public void start() throws Exception {
Logger.info("Starting the Solr Cloud Search Service (Event Consumer) verticle");
super.start();
ConfigStoreOptions fileStore = new ConfigStoreOptions().setType("file")
.setConfig(new JsonObject().put("path", "conf/config.json"));
ConfigRetrieverOptions configRetrieverOptions = new ConfigRetrieverOptions()
.addStore(fileStore);
ConfigRetriever configRetriever = ConfigRetriever.create(vertx, configRetrieverOptions);
configRetriever.getConfig(ar -> {
if (ar.succeeded()) {
JsonObject configJson = ar.result();
EventBus eventBus = vertx.eventBus();
eventBus.<JsonObject>consumer(GET_USEARCH_DOCS).handler(getDocumentService(searchRepositoryService, configJson));
Logger.info("Completed search service event processing");
} else {
Logger.error("Failed to retrieve the config");
}
});
}
private Handler<Message<JsonObject>> getDocumentService(SearchRepository searchRepositoryService, JsonObject configJson) {
return requestMessage -> vertx.<String>executeBlocking(future -> {
try {
//I need to incorporate the logic here that adds futures to a list and composes the CompositeFuture
/*
//Below is my logic to populate the future list
WebClient client = WebClient.create(vertx);
List<Future> futureList = new ArrayList<>();
for (Object collection : searchRepositoryService.findAllCollections(configJson).getJsonArray(SOLR_CLOUD_COLLECTION).getList()) {
Future<String> future1 = client.post(8983, "127.0.0.1", "/solr/" + collection + "/query")
.expect(ResponsePredicate.SC_OK)
.sendJsonObject(requestMessage.body())
.map(HttpResponse::bodyAsString).recover(error -> {
System.out.println(error.getMessage());
return Future.succeededFuture();
});
futureList.add(future1);
}
//Below is the CompositeFuture logic, but the logic and construct do not make sense to me. What goes as the first and second argument of the executeBlocking method?
/*CompositeFuture.join(futureList)
.onSuccess(result -> {
result.list().forEach( x -> {
if(x != null){
requestMessage.reply(result.result());
}
}
);
})
.onFailure(error -> {
System.out.println("We should not fail");
})
*/
future.complete("DAO returns a Json String");
} catch (Exception e) {
future.fail(e);
}
}, result -> {
if (result.succeeded()) {
requestMessage.reply(result.result());
} else {
requestMessage.reply(result.cause()
.toString());
}
});
}
}
I was able to use org.springframework.web.reactive.function.client.WebClient calls to compose my search result from multiple web client calls, as opposed to using Future<io.vertx.ext.web.client.WebClient> with CompositeFuture.
I was trying to avoid mixing Spring Boot and Vert.x, but unfortunately Vert.x CompositeFuture did not work here:
//This method supplies the parameter for the future.complete(..) line in getDocumentService(SearchRepository,JsonObject)
private List<JsonObject> findByQueryParamsAndDataSources(SearchRepository searchRepositoryService,
JsonObject configJson,
JsonObject requestMessage)
throws SolrServerException, IOException {
List<JsonObject> searchResultList = new ArrayList<>();
for (Object collection : searchRepositoryService.findAllCollections(configJson).getJsonArray(SOLR_CLOUD_COLLECTION).getList()) {
searchResultList.add(new JsonObject(doSearchPerCollection(collection.toString(), requestMessage.toString())));
}
return aggregateMultiCollectionSearchResults(searchResultList);
}
public String doSearchPerCollection(String collection, String message) {
org.springframework.web.reactive.function.client.WebClient client =
org.springframework.web.reactive.function.client.WebClient.create();
return client.post()
.uri("http://127.0.0.1:8983/solr/" + collection + "/query")
.contentType(MediaType.APPLICATION_JSON)
.accept(MediaType.APPLICATION_JSON)
.body(BodyInserters.fromValue(message.toString()))
.retrieve()
.bodyToMono(String.class)
.block();
}
private List<JsonObject> aggregateMultiCollectionSearchResults(List<JsonObject> searchList){
//TODO: Search result aggregation
return searchList;
}
My use case is that the second verticle should make multiple Vert.x web client calls and combine the responses.
If an API call fails, I want to log the error and still continue processing and merging the responses from the other calls.
Please, any help on how my code above could be adapted to handle this use case?
I am looking at Vert.x CompositeFuture, but have made no headway and found no useful example yet!
What you are looking for can be done with Future coordination and a little bit of additional handling:
CompositeFuture.join(future1, future2, future3).onComplete(ar -> {
if (ar.succeeded()) {
// All succeeded
} else {
// All completed and at least one failed
}
});
The join composition waits until all futures are completed, either with a success or a failure.
CompositeFuture.join takes several futures arguments (up to 6) and returns a future that is succeeded when all the futures are succeeded, and failed when all the futures are completed and at least one of them is failed.
Using join you will wait for all futures to complete. The issue is that if one of them fails, you will not be able to obtain the responses of the others, because the CompositeFuture will be failed. To avoid this, you should add Future<T> recover(Function<Throwable, Future<T>> mapper) to each of your futures; in it, log the error and return an empty response so that the future does not fail.
Here is short example:
Future<String> response1 = client.post(8887, "localhost", "work").expect(ResponsePredicate.SC_OK).send()
.map(HttpResponse::bodyAsString).recover(error -> {
System.out.println(error.getMessage());
return Future.succeededFuture();
});
Future<String> response2 = client.post(8887, "localhost", "error").expect(ResponsePredicate.SC_OK).send()
.map(HttpResponse::bodyAsString).recover(error -> {
System.out.println(error.getMessage());
return Future.succeededFuture();
});
CompositeFuture.join(response2, response1)
.onSuccess(result -> {
result.list().forEach(x -> {
if(x != null) {
System.out.println(x);
}
});
})
.onFailure(error -> {
System.out.println("We should not fail");
});
Edit 1:
The limit for CompositeFuture.join(Future...) is 6 futures. If you need more, you can use CompositeFuture.join(Arrays.asList(future1, future2, future3)), which takes a list with an unlimited number of futures.
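For completeness, a short sketch of the list variant (futureList standing in for the futures built in a loop, each with recover() attached as above):
List<Future> futureList = new ArrayList<>();
// ... fill the list with as many recovered futures as needed ...
CompositeFuture.join(futureList)
        .onSuccess(composite -> composite.list().forEach(x -> {
            if (x != null) {
                System.out.println(x);
            }
        }));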
I would like to turn many records into one per message. I tried many things like custom reducers and aggregators, but they all still send records back out one-to-one. For example, I would like to convert many strings into just one string. If my stream has messages with the same key but different values, "the", "sky", "is", "blue", then I would like to output one concatenation of them into a new topic: "the,sky,is,blue,". What I am instead getting is 4 messages: "the,", "the,sky,", "the,sky,is,", "the,sky,is,blue,". When I send a second message to the Kafka consumer, it concatenates onto the previous aggregation and I eventually receive "the,sky,is,blue,the,sky,is,blue,".
I also tried using a custom StoreBuilder and changing a lot of the settings to see if that would do anything.
Map<String, String> changelogConfig = new HashMap<>();
changelogConfig.put("message.down.conversion.enable", "true");
changelogConfig.put("flush.messages", "0");
changelogConfig.put("flush.ms", "0");
StoreBuilder<KeyValueStore<String, String>> aggStoreSupplier = Stores.keyValueStoreBuilder(
Stores.persistentKeyValueStore("AggStore"),
Serdes.String(),
Serdes.String())
.withLoggingEnabled(changelogConfig);
KStream<String, String> results = source // single messages get processed, and eventually I get these string results I need to concatenate
.groupByKey() // this KGroupedStream has the N records that were sent in the message
.reduce(new Reducer<String>() {
@Override
public String apply(String aggValue, String value) {
return value + "," + aggValue;
}
}, Materialized.as("AggStore"))
.toStream();
results.to("results", Produced.with(Serdes.String(), Serdes.String()));
final Topology topology = builder.build(); // to describe topology
System.out.println(topology.describe()); // to print description
final KafkaStreams streams = new KafkaStreams(topology, props);
final CountDownLatch latch = new CountDownLatch(1);
// attach shutdown handler to catch control-c
Runtime.getRuntime().addShutdownHook(new Thread("streams-shutdown-hook") {
@Override
public void run() {
streams.close();
latch.countDown();
}
});
try {
streams.cleanUp();
streams.start();
latch.await();
} catch (Throwable e) {
System.exit(1);
}
System.exit(0);
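A sketch of one way to get this behavior, assuming Kafka Streams 2.1+ (an illustration, not code from the question): a KTable emits a downstream update for every input record, which is exactly the intermediate "the,", "the,sky," output above, and an unwindowed aggregate also keeps concatenating across messages. Windowing the aggregation and suppressing updates until the window closes emits only one final concatenation per key and window:
KStream<String, String> finalResults = source
        .groupByKey()
        .windowedBy(TimeWindows.of(Duration.ofSeconds(10)).grace(Duration.ZERO)) // bound "one message" by a window
        .reduce((aggValue, value) -> value + "," + aggValue, Materialized.as("AggStore"))
        .suppress(Suppressed.untilWindowCloses(Suppressed.BufferConfig.unbounded())) // emit only the final result per window
        .toStream()
        .map((windowedKey, value) -> KeyValue.pair(windowedKey.key(), value)); // drop the window from the key
finalResults.to("results", Produced.with(Serdes.String(), Serdes.String()));
Each new window also starts a fresh aggregate, so a second batch no longer concatenates onto the previous result.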
In my application, on the creation of a task, I need to make an API call to Google to create a Google Calendar event.
I decided to make that API call on a separate thread so that our client doesn't have to wait longer for the response.
@Override
@Transactional( rollbackFor = DataException.class )
public TaskResponseBean createTask( TaskCreationBean taskCreationBean, UserAccessDetails accessDetails )
throws DataException
{
String googleEventId = "";
try
{
TaskServiceUtil.validateInputBeforeCreatingTask(taskCreationBean, accessDetails);
MatterModel matterModel = matterService.giveMatterIfExistElseThrowException(taskCreationBean.getMatterId(),
owner);
//A task is unique for a user for a matter
taskCommons.throwExceptionIfTaskNameAlreadyExistForTheMatter(taskCreationBean.getTaskName().trim(), owner,
matterModel);
TaskModel savedTask = taskModelRepository.save(savableTask);
if( !NullEmptyUtils.isNull(savableTask.getDueDate()) )
{
final CreateEventBean createEventBean = getCreateEventBean(getEventParticipants(savedTask), savedTask);
calendarTrigerer.triggerEventCreation(createEventBean, savedTask.getId(), null,
GoogleCalendarTrigerer.EVENT_TYPE_CREATE);
}
// Keep track of the list of assignees of a task
if( taskCreationBean.getHaveAssignee() || taskCreationBean.getIsSelfAssigned() )
{
saveTaskAssignedHistory(savedTask, owner, savedTask.getAssignedTo(), false);
}
}
catch( DataException e )
{
LOGGER.error(GeneralConstants.ERROR, e);
if( !NullEmptyUtils.isNullOrEmpty(googleEventId) )
{
LOGGER.info("Deleting google event id {}", googleEventId);
googleCalendarService.deleteGoogleCalendarEvent(googleEventId);
}
throw e;
}
catch( Exception e )
{
LOGGER.error(GeneralConstants.ERROR, e);
if( !NullEmptyUtils.isNullOrEmpty(googleEventId) )
{
LOGGER.info("Deleting google event id {}", googleEventId);
googleCalendarService.deleteGoogleCalendarEvent(googleEventId);
}
throw new DataException(GeneralConstants.EXCEPTION, GeneralConstants.SOMETHING_WENT_WRONG,
HttpStatus.INTERNAL_SERVER_ERROR);
}
}
@Async
void triggerEventCreation( CreateEventBean createEventBean, Long taskId, String eventId, String eventType )
throws DataException
{
try
{
TaskModel taskModel = null;
if( !NullEmptyUtils.isNullOrEmpty(taskId) )
{
int retryCount = 0;
Optional<TaskModel> taskModelOptional = taskModelRepository.findByIdAndIsActiveTrue(taskId);
while( !taskModelOptional.isPresent() )
{
System.out.println("NOT PRESENT***********************************************");
taskModelOptional = taskModelRepository.findByIdAndIsActiveTrue(taskId);
if( retryCount++ > 50 )
{
throw new DataException(GeneralConstants.EXCEPTION, "Transaction is unable to commit",
HttpStatus.INTERNAL_SERVER_ERROR);
}
}
taskModel = taskModelOptional.get();
}
switch ( eventType )
{
case EVENT_TYPE_CREATE :
eventId = googleCalendarService.addGoogleCalendarEvent(createEventBean);
System.out.println("ADDED EVENT***********************************************" + eventId);
System.out.println("PRESENT***********************************************");
taskModel.setGoogleEventId(eventId);
taskModelRepository.save(taskModel);
break;
case EVENT_TYPE_DELETE :
NullEmptyUtils.throwExceptionIfInputIsNullOrEmpty(eventId);
googleCalendarService.deleteGoogleCalendarEvent(eventId);
taskModel.setGoogleEventId(null);
taskModel.setIsActive(false);
taskModelRepository.save(taskModel);
break;
case EVENT_TYPE_UPDATE :
NullEmptyUtils.throwExceptionIfInputIsNullOrEmpty(eventId);
NullEmptyUtils.throwExceptionIfInputIsNullOrEmpty(createEventBean);
taskModel.setGoogleEventId(
googleCalendarService.updateGoogleCalendarEvent(eventId, createEventBean));
taskModelRepository.save(taskModel);
break;
default :
throw new DataException(GeneralConstants.EXCEPTION, "Invalid eventType", HttpStatus.BAD_REQUEST);
}
}
catch( DataException e )
{
log.error(GeneralConstants.ERROR, e);
throw e;
}
catch( Exception e )
{
log.error(GeneralConstants.ERROR, e);
throw new DataException(GeneralConstants.EXCEPTION,
"Something went wrong while trigering create event action", HttpStatus.INTERNAL_SERVER_ERROR);
}
}
How I solved it (making sure it works) while creating an event:
In the separate thread, I iterate and wait until the task gets created, and then update it with the event id and save it.
But a new problem arises when updating the task, because I already have its details in the database. In the update method, I set the updated values on the TaskModel and call taskModelRepo.save(), and in a separate thread I call the Google Calendar update API; after a successful API call I have to update the corresponding TaskModel and save it.
The issue here is that sometimes, when I fetch the task by id after the Google API call succeeds, I get a TaskModel with non-updated values, because the previous transaction has not committed yet.
So how do I ensure the new thread runs only after the transaction of the method from which it is called has committed?
You can use the Google Guava EventBus to solve this problem. It's a publish-subscribe model in which the producer is responsible for emitting events; these events are passed on to the event bus and sent to all listeners subscribed to that event.
A listener subscribes to an event and is triggered when that event is posted by the producer; you can have a listener method run synchronously or asynchronously depending on the kind of event bus you use.
Here is the link: https://github.com/google/guava/wiki/EventBusExplained
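A minimal sketch of that pattern (the TaskCreatedEvent and TaskCreatedListener names are illustrative assumptions, not from the question's code base; the classes come from com.google.common.eventbus):
public class TaskCreatedListener {

    @Subscribe
    public void onTaskCreated(TaskCreatedEvent event) {
        // runs when a TaskCreatedEvent is posted; call the Google Calendar API here
    }
}

// wiring and publishing:
EventBus eventBus = new EventBus(); // synchronous delivery on the posting thread
// EventBus eventBus = new AsyncEventBus(Executors.newSingleThreadExecutor()); // asynchronous delivery
eventBus.register(new TaskCreatedListener());
eventBus.post(new TaskCreatedEvent(taskId)); // post only after the surrounding transaction has committed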
I am trying to understand ReactiveX using RxJava, but I can't get the whole reactive idea. My case is the following:
I have a Task class. It has a perform() method which executes an HTTP request and gets a response through an executeRequest() method. The request may be executed many times (a defined number of repetitions). I want to grab all the results of executeRequest() and combine them into a Flowable data stream so I can return this Flowable from perform(). So in the end I want my method to return all results of the requests that my Task executed.
executeRequest() returns a Single because it executes only one request and may provide only one response, or none at all (in case of timeout).
In perform() I create a Flowable range of numbers, one for each repetition. Subscribed to this Flowable, I execute a request per repetition. I additionally subscribe to each response Single for logging and for gathering responses into a collection for later. So now I have a set of Singles; how can I merge them into a Flowable to return from perform()? I tried to mess around with operators like merge(), but I don't understand its parameter types.
I've read some guides on the web, but they are all very general or don't provide examples matching my case.
public Flowable<HttpClientResponse> perform() {
Long startTime = System.currentTimeMillis();
List<HttpClientResponse> responses = new ArrayList<>();
List<Long> failedRepetitionNumbers = new ArrayList<>();
Flowable.rangeLong(0, repetitions)
.subscribe(repetition -> {
logger.debug("Performing repetition {} of {}", repetition + 1, repetitions);
Long currentTime = System.currentTimeMillis();
if (durationCap == 0 || currentTime - startTime < durationCap) {
Single<HttpClientResponse> response = executeRequest(method, url, headers, body);
response.subscribe(successResult -> {
logger.info("Received response with code {} in the {}. repetition.", successResult
.statusCode(), repetition + 1);
responses.add(successResult);
},
error -> {
logger.error("Failed to receive response from {}.", url);
failedRepetitionNumbers.add(repetition);
});
waitInterval(minInterval, maxInterval);
} else {
logger.info("Reached duration cap of {}ms for task {}.", durationCap, this);
}
});
return Flowable.merge(???);
}
And executeRequest()
private Single<HttpClientResponse> executeRequest(HttpMethod method, String url, LinkedMultiValueMap<String, String>
headers, JsonNode body) {
CompletableFuture<HttpClientResponse> responseFuture = new CompletableFuture<>();
HttpClient client = vertx.createHttpClient();
HttpClientRequest request = client.request(method, url, responseFuture::complete);
headers.forEach(request::putHeader);
request.write(body.toString());
request.setTimeout(timeout);
request.end();
return Single.fromFuture(responseFuture);
}
Instead of subscribing to each observable (each HTTP request) within your perform() method, just keep chaining the observables like this. Your code can be reduced to something like:
public Flowable<HttpClientResponse> perform() {
// Here return a flowable , which can emit n number of times. (where n = your number of HTTP requests)
return Flowable.rangeLong(0, repetitions) // start a counter
.doOnNext(repetition -> logger.debug("Performing repetition {} of {}", repetition + 1, repetitions)) // print the current count
.flatMap(count -> executeRequest(method, url, headers, body).toFlowable()) // get the executeRequest as Flowable
.timeout(durationCap, TimeUnit.MILLISECONDS); // apply a timeout policy
}
And finally, you can subscribe to perform() at the place where you actually need to execute all of this, as shown below:
perform()
.subscribeWith(new DisposableSubscriber<HttpClientResponse>() {
@Override
public void onNext(HttpClientResponse httpClientResponse) {
// onNext will be triggered each time a request has executed and is ready with a result
// if you had 5 HTTP requests, this can trigger 5 times, each with its "httpClientResponse" (if all calls were successful)
}
@Override
public void onError(Throwable t) {
// any error during the execution of these requests,
// including a TimeoutException in case a timeout happens in between
}
@Override
public void onComplete() {
// will be called finally if no errors happened and onNext delivered all the results
}
});
I have the following configuration for the creation of two channels (using the JmsChannelFactoryBean):
@Bean
public JmsChannelFactoryBean jmsChannel(ActiveMQConnectionFactory activeMQConnectionFactory) {
JmsChannelFactoryBean fb = new JmsChannelFactoryBean(true);
fb.setConnectionFactory(activeMQConnectionFactory);
fb.setDestinationName("something.queue");
fb.setErrorHandler(t -> log.error("something went wrong on jms channel", t));
return fb;
}
@Bean
public JmsChannelFactoryBean jmsChannelDLQ(ActiveMQConnectionFactory activeMQConnectionFactory) {
JmsChannelFactoryBean fb = new JmsChannelFactoryBean(true);
fb.setConnectionFactory(activeMQConnectionFactory);
fb.setDestinationName("something.queue.DLQ");
fb.setErrorHandler(t -> log.error("something went wrong on jms channel", t));
return fb;
}
The something.queue is configured to put dead letters on something.queue.DLQ. I'm mostly using the Java DSL to configure the app and, if possible, would like to keep it that way.
The case is: a message is taken from jmsChannel and put to the SFTP outbound gateway; if there is a problem sending the file, the message is put back into jmsChannel as not delivered. After some retries it is designated as poisonous and put on something.queue.DLQ.
Is it possible to get the info on the error channel when that happens?
What is the best practice for handling errors when using JMS-backed message channels?
EDIT 2
The integration flow is defined as:
IntegrationFlows.from(filesToProcessChannel).handle(outboundGateway)
Where filesToProcessChannel is the JMS-backed channel and the outbound gateway is defined as:
@Bean
public SftpOutboundGateway outboundGateway(SftpRemoteFileTemplate sftpRemoteFileTemplate) {
SftpOutboundGateway gateway = new SftpOutboundGateway(sftpRemoteFileTemplate, AbstractRemoteFileOutboundGateway.Command.PUT.getCommand(), EXPRESSION_PAYLOAD);
ArrayList<Advice> adviceChain = new ArrayList<>();
adviceChain.add(errorHandlingAdvice());
gateway.setAdviceChain(adviceChain);
return gateway;
}
I'm trying to grab the exception using an advice:
@Bean
public Advice errorHandlingAdvice() {
RequestHandlerRetryAdvice advice = new RequestHandlerRetryAdvice();
RetryTemplate retryTemplate = new RetryTemplate();
SimpleRetryPolicy retryPolicy = new SimpleRetryPolicy();
retryPolicy.setMaxAttempts(1);
retryTemplate.setRetryPolicy(retryPolicy);
advice.setRetryTemplate(retryTemplate);
advice.setRecoveryCallback(new ErrorMessageSendingRecoverer(filesToProcessErrorChannel));
return advice;
}
Is this the right way?
EDIT 3
There is certainly something wrong with SftpOutboundGateway and advices (or with me :/):
I used the following advice from the Spring Integration reference:
@Bean
public Advice expressionAdvice() {
ExpressionEvaluatingRequestHandlerAdvice advice = new ExpressionEvaluatingRequestHandlerAdvice();
advice.setSuccessChannelName("success.input");
advice.setOnSuccessExpressionString("payload + ' was successful'");
advice.setFailureChannelName("failure.input");
advice.setOnFailureExpressionString(
"payload + ' was bad, with reason: ' + #exception.cause.message");
advice.setTrapException(true);
return advice;
}
@Bean
public IntegrationFlow success() {
return f -> f.handle(System.out::println);
}
@Bean
public IntegrationFlow failure() {
return f -> f.handle(System.out::println);
}
And when I use:
return IntegrationFlows.from(filesToProcessChannel)
.handle((GenericHandler<File>) (payload, headers) -> {
if (payload.equals("x")) {
return null;
}
else {
throw new RuntimeException("some failure");
}
}, spec -> spec.advice(expressionAdvice()))
It gets called, and I get the error message printed out (and that is expected), but when I try to use:
return IntegrationFlows.from(filesToProcessChannel)
.handle(outboundGateway, spec -> spec.advice(expressionAdvice()))
The advice is not called, and the error message is put back to JMS.
The app is using Spring Boot v2.0.0.RELEASE, Spring v5.0.4.RELEASE.
EDIT 4
I managed to resolve the advice issue using the following configuration; I still don't understand why the handler spec will not work:
@Bean
IntegrationFlow files(SftpOutboundGateway outboundGateway,
...
) {
return IntegrationFlows.from(filesToProcessChannel)
.handle(outboundGateway)
...
.log(LoggingHandler.Level.INFO)
.get();
}
@Bean
public SftpOutboundGateway outboundGateway(SftpRemoteFileTemplate sftpRemoteFileTemplate) {
SftpOutboundGateway gateway = new SftpOutboundGateway(sftpRemoteFileTemplate, AbstractRemoteFileOutboundGateway.Command.PUT.getCommand(), EXPRESSION_PAYLOAD);
ArrayList<Advice> adviceChain = new ArrayList<>();
adviceChain.add(expressionAdvice());
gateway.setAdviceChain(adviceChain);
return gateway;
}
@Bean
public ExpressionEvaluatingRequestHandlerAdvice expressionAdvice() {
ExpressionEvaluatingRequestHandlerAdvice advice = new ExpressionEvaluatingRequestHandlerAdvice();
advice.setSuccessChannelName("success.input");
advice.setOnSuccessExpressionString("payload + ' was successful'");
advice.setFailureChannelName("failure.input");
advice.setOnFailureExpressionString(
"payload + ' was bad, with reason: ' + #exception.cause.message");
advice.setTrapException(true);
return advice;
}
@Bean
public IntegrationFlow success() {
return f -> f.handle(System.out::println);
}
@Bean
public IntegrationFlow failure() {
return f -> f.handle(System.out::println);
}
Since the movement to the DLQ is performed by the broker, the application has no mechanism to log the situation - it is not even aware that it happened.
You would have to catch the exceptions yourself and publish the message to the DLQ yourself, after some number of attempts (JMSXDeliveryCount header), instead of using the broker policy.
EDIT
Add an Advice to the .handle() step.
.handle(outboundGateway, e -> e.advice(myAdvice))
Where myAdvice implements MethodInterceptor.
In the invoke method, after a failure, you can check the delivery count header and, if it exceeds your threshold, publish the message to the DLQ (e.g. send it to another channel that has a JMS outbound adapter subscribed) and log the error; if the threshold has not been exceeded, simply return the result of invocation.proceed() (or rethrow the exception).
That way, you control publishing to the DLQ rather than having the broker do it. You can also add more information, such as the exception, to headers.
EDIT2
You need something like this
public class MyAdvice implements MethodInterceptor {

    @Autowired
    private MessageChannel toJms;

    @Override
    public Object invoke(MethodInvocation invocation) throws Throwable {
        try {
            return invocation.proceed();
        }
        catch (Exception e) {
            Message<?> message = (Message<?>) invocation.getArguments()[0];
            Integer deliveries = message.getHeaders().get("JMSXDeliveryCount", Integer.class);
            if (deliveries != null && deliveries > 3) {
                this.toJms.send(message); // maybe rebuild with additional headers about the error
                return null; // swallow the failure; the message has been handed to the DLQ flow
            }
            else {
                throw e;
            }
        }
    }

}
(it should be close, but I haven't tested it). It assumes your broker populates that header.
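As a hedged sketch of the wiring this implies (connectionFactory standing in for the app's JMS ConnectionFactory bean), the advice's toJms channel could feed a JMS outbound adapter pointed at the DLQ destination:
@Bean
public IntegrationFlow dlqFlow(ConnectionFactory connectionFactory) {
    return IntegrationFlows.from("toJms")
            .handle(Jms.outboundAdapter(connectionFactory)
                    .destination("something.queue.DLQ"))
            .get();
}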