Auto Delete Messages from Queue Once consumed using Spring AMQP - java

I have two applications exchanging data over RabbitMQ, implemented with Spring AMQP. In one scenario, the consumer may encounter an exception while processing a message it has consumed.
If an exception occurs, I plan to log it to the database. I need the message to be removed from the queue once it reaches the consumer, whether processing succeeds or an error is encountered.
How can I forcefully remove the message from the queue? Otherwise it stays there if my application fails to process it.
Below is my listener code:
@RabbitListener(containerFactory = "rabbitListenerContainerFactory", queues = Constants.JOB_QUEUE)
public void handleMessage(JobListenerDTO jobListenerDTO) {
    //System.out.println("Received summary: " + jobListenerDTO.getProcessXML());
    //amqpAdmin.purgeQueue(Constants.JOB_QUEUE, true);
    try {
        Map<String, Object> variables = new HashMap<String, Object>();
        variables.put("initiator", "cmy5kor");
        Deployment deploy = repositoryService.createDeployment()
                .addString(jobListenerDTO.getProcessId() + ".bpmn20.xml", jobListenerDTO.getProcessXML())
                .deploy();
        ProcessInstance processInstance = runtimeService.startProcessInstanceByKey(jobListenerDTO.getProcessId(), variables);
        System.out.println("Process Instance is:::::::::::::" + processInstance);
    } catch (Exception e) {
        e.printStackTrace();
    }
}
Configuration Code
@Configuration
@EnableRabbit
public class RabbitMQJobConfiguration extends AbstractBipRabbitConfiguration {

    @Bean
    public RabbitTemplate rabbitTemplate() {
        RabbitTemplate template = new RabbitTemplate(connectionFactory());
        template.setQueue(Constants.JOB_QUEUE);
        template.setMessageConverter(jsonMessageConverter());
        return template;
    }

    @Bean
    public Queue jobQueue() {
        return new Queue(Constants.JOB_QUEUE);
    }

    @Bean(name = "rabbitListenerContainerFactory")
    public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory() {
        SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
        factory.setConnectionFactory(connectionFactory());
        Jackson2JsonMessageConverter messageConverter = new Jackson2JsonMessageConverter();
        DefaultClassMapper classMapper = new DefaultClassMapper();
        Map<String, Class<?>> idClassMapping = new HashMap<String, Class<?>>();
        idClassMapping.put("com.bosch.diff.approach.TaskMessage", JobListenerDTO.class);
        classMapper.setIdClassMapping(idClassMapping);
        messageConverter.setClassMapper(classMapper);
        factory.setMessageConverter(messageConverter);
        factory.setReceiveTimeout(10L);
        return factory;
    }
}

I don't know the Spring API or configuration for RabbitMQ, but this:
 I have to remove message from the queue explicitly once the message reaches the consumer whether it is successful processing or error encountered.
is exactly what happens when you set the auto-acknowledge flag. That way the message is acknowledged as soon as it is consumed, so it is gone from the queue.

As long as your listener catches the exception, the message will be removed from the queue.
If your listener throws an exception, the message is requeued by default; that behavior can be changed by throwing an AmqpRejectAndDontRequeueException or by setting the defaultRequeueRejected property - see the documentation for details.
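For example, applied to the rabbitListenerContainerFactory() above, a minimal sketch of the options (illustrative only, not part of the original configuration):

// Option 1: acknowledge on delivery - the message leaves the queue as soon as it is
// handed to the listener, regardless of the processing outcome.
factory.setAcknowledgeMode(AcknowledgeMode.NONE);

// Option 2: keep the default AUTO acknowledge mode, but discard (or dead-letter)
// failed messages instead of requeueing them.
factory.setDefaultRequeueRejected(false);

// Option 3: in the listener's catch block, reject just the failing message:
// throw new AmqpRejectAndDontRequeueException("Logged to DB, do not requeue", e);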

Related

Kafka reading old and new value from topic

We have a producer-consumer environment and we are using Spring Boot for our project.
The Kafka configuration was done with the following class:
@Configuration
@EnableKafka
public class DefaultKafkaConsumerConfig {

    @Value("${spring.kafka.bootstrap-servers}")
    private String bootstrapServers;

    @Value("${spring.kafka.bootstrap-servers-group}")
    private String bootstrapServersGroup;

    @Bean
    public ConsumerFactory<String, String> consumerDefaultFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, IntegerDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.GROUP_ID_CONFIG, bootstrapServersGroup);
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, true);
        return new DefaultKafkaConsumerFactory<>(props);
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerDefaultContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerDefaultFactory());
        return factory;
    }
}
SCENARIO: We write values to Kafka topics. Consider a topic holding live data, with a status like "live:0" for a completed event and "live:1" for a live event. When an event goes live, the record is updated and written to the topic, and we process the event based on that topic.
ISSUE: When the event goes live I read the record with "live:1" from the topic and process it. Later the event is updated and new data is written to the topic. When that new data arrives I can read it, but I also receive the old data again. Because I get both the old and the new data at the same time, my event is affected: sometimes it shows as live, sometimes as completed.
Can anyone give any suggestions here?
Why am I getting both the already committed data and the newly updated data?
Is there anything I am missing in the configuration?
You may want to check a couple of things:
1. the number of partitions
2. the number of consumers
Also, does this mean you are re-writing the consumed message to the topic again with the new status?
try {
    ListenableFuture<SendResult<String, String>> futureResult = this.kafkaTemplate.send(topicName, message);
    futureResult.addCallback(new ListenableFutureCallback<SendResult<String, String>>() {

        @Override
        public void onSuccess(SendResult<String, String> result) {
            log.info("Message successfully sent to topic {} with offset {} ", result.getRecordMetadata().topic(), result.getRecordMetadata().offset());
        }

        @Override
        public void onFailure(Throwable ex) {
            FAILMESSAGELOGGER.info("{},{}", topicName, message);
            log.info("Unable to send Message to topic {} due to ", topicName, ex);
        }
    });
} catch (Exception e) {
    log.error("Outer Exception occured while sending message {} to topic {}", new Object[] { message, topicName, e });
    FAILMESSAGELOGGER.info("{},{}", topicName, message);
}
This is what we have.
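For what it's worth, a hedged way to verify the partition and consumer counts asked about above is the Kafka AdminClient; the bootstrap address, topic name, and group id below are placeholders, not values from the original post:

// Sketch only: requires org.apache.kafka.clients.admin.* on the classpath.
public void printPartitionAndConsumerCounts() throws Exception {
    Properties adminProps = new Properties();
    adminProps.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    try (AdminClient admin = AdminClient.create(adminProps)) {
        // Number of partitions for the (placeholder) topic
        int partitions = admin.describeTopics(Collections.singletonList("live-events"))
                .all().get().get("live-events").partitions().size();
        // Number of active members in the (placeholder) consumer group
        int consumers = admin.describeConsumerGroups(Collections.singletonList("my-consumer-group"))
                .all().get().get("my-consumer-group").members().size();
        System.out.printf("partitions=%d, active consumers in group=%d%n", partitions, consumers);
    }
}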

Remove "ActiveMQ.Advisory.Producer.x" prefix

Problem:
Somehow the producer is sending events to "ActiveMQ.Advisory.Producer.Queue.Queue" instead of "Queue".
ActiveMQ admin console, Topics section, screenshot with the producer queue: (not sure why it has a queue, 0 consumers, and number of messages enqueued = 38)
ActiveMQ admin console, Queues section, screenshot with the consumer queue: (it shows consumers = 1 but number of messages enqueued = 0)
Attaching producer, consumer, and config code.
Producer
public void sendMessage(WorkflowRun message) {
    var queue = "Queue";
    try {
        log.info("Attempting to send message to queue: " + queue);
        jmsTemplate.convertAndSend(queue, message);
    } catch (Exception e) {
        log.error("Received exception during send message: ", e);
    }
}
Listener
@JmsListener(destination = "Queue")
public void messageListener(SystemMessage systemMessage) {
    LOGGER.info("Message received! {}", systemMessage);
}
Config
#Value("${spring.active-mq.broker-url}")
private String brokerUrl;
#Bean
public ConnectionFactory connectionFactory() throws JMSException {
ActiveMQConnectionFactory activeMQConnectionFactory = new ActiveMQConnectionFactory();
activeMQConnectionFactory.setBrokerURL(brokerUrl);
activeMQConnectionFactory.setWatchTopicAdvisories(false);
activeMQConnectionFactory.createQueueConnection(ActiveMQConnectionFactory.DEFAULT_USER,
ActiveMQConnectionFactory.DEFAULT_PASSWORD);
return activeMQConnectionFactory;
}
When your producer starts, the ActiveMQ broker produces an 'Advisory Message' and sends it to that topic. The count indicates how many producers have been created for queue://Queue -- in this case 38 producers have been created.
Since the messages are not showing up on the queue itself, it appears that in your Spring wiring the connection, session, and producer objects are being created, but the messages are not actually being sent.
Additionally, if queue://ActiveMQ.Advisory.. destinations are showing up, you probably have a bug in some other part of the app (or a monitoring tool?) that should be configured to consume from topic://ActiveMQ.Advisory.. instead of queue://.
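If the advisory destinations themselves are unwanted, advisory support can also be switched off on the broker side. A hedged sketch for an embedded broker (the activemq-broker dependency and connector URL are assumptions; a standalone broker would set advisorySupport="false" on the <broker> element in activemq.xml instead):

public BrokerService startBrokerWithoutAdvisories() throws Exception {
    BrokerService broker = new BrokerService();
    broker.setAdvisorySupport(false);   // no ActiveMQ.Advisory.* destinations are created
    broker.addConnector("tcp://localhost:61616");
    broker.start();
    return broker;
}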

RabbitHandler: How to catch "ListenerExecutionFailedException: Listener method 'no match' threw exception" correctly and proceed working

For an application I am doing some tests with Spring Boot and RabbitMQ.
I set up a very simple Sender - Receiver application:
Sender:
public class Tut1Sender {

    private final Gson gson = new Gson();

    @Autowired
    private RabbitTemplate template;

    @Autowired
    private Queue queue;

    public static int count = 1;

    @Scheduled(fixedDelay = 1000, initialDelay = 500)
    public void send() throws InterruptedException {
        String message = "Hello World! " + " Nr. " + count;
        MessageObject mo = new MessageObject(message);
        String toJson = gson.toJson(mo);
        this.template.convertAndSend(queue.getName(), toJson);
        System.out.println(" [x] Sent '" + toJson + "'");
        Thread.sleep(5);
        count++;
    }
}
This part works just fine and fills my queue with messages.
Here is my receiver:
@RabbitListener(queues = "hello")
public class Tut1Receiver {

    private final Gson gson = new Gson();

    @RabbitHandler
    public void receive(String in) throws InterruptedException {
        System.out.println("Received Raw: " + in);
        MessageObject fromJson = gson.fromJson(in, MessageObject.class);
        System.out.println("Received Message '" + fromJson + "'");
        int nextInt = ThreadLocalRandom.current().nextInt(1000, 5000);
        System.out.println("Sleep for " + nextInt + " ms");
        Thread.sleep(nextInt);
    }
}
Messages created by the sender are handled correctly by the receiver. I get a nice output, and the message is acknowledged and deleted from the queue.
Then I put a message directly into the queue via the RabbitMQ web GUI.
The receiver grabs this message. I can tell because the message I created switched from status "Ready" to "Unacked" (as displayed in the web GUI).
But the receiver gave me no output.
Then I configured the ContainerFactory:
#Profile("receiver")
#Bean
public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(
SimpleRabbitListenerContainerFactoryConfigurer configurer,
ConnectionFactory connectionFactory)
{
SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
configurer.configure(factory, connectionFactory);
factory.setErrorHandler(e -> {
System.out.println("Error: "+e);
System.out.println("Raw: "+((ListenerExecutionFailedException) e).getFailedMessage().toString());
});
return factory;
}
Now I am getting the following error (in an endless loop)
Error: org.springframework.amqp.rabbit.listener.exception.ListenerExecutionFailedException: Listener method 'no match' threw exception
Raw: (Body:'[B#53452feb(byte[11])' MessageProperties [headers={content_type=text/plain, content_encoding=UTF-8}, contentLength=0, receivedDeliveryMode=PERSISTENT, priority=0, redelivered=true, receivedExchange=, receivedRoutingKey=hello, deliveryTag=1, consumerTag=NOT_SET, consumerQueue=hello])
How can I handle this error? The receiver should just display the error, acknowledge the message, and proceed with the next message.
What is the right way to handle faulty messages in general?
For a broken message, a consumer can reject it or let the broker redeliver it. If you are sure the broken message can't be processed by any other consumer, you should tell the broker to discard it or deliver it to a dead-letter exchange.
From the official Spring AMQP documentation:
 Another alternative is to set the container's defaultRequeueRejected property to false. This causes all failed messages to be discarded. When using RabbitMQ 2.8.x or higher, this also facilitates delivering the message to a Dead Letter Exchange.
Or, you can throw an AmqpRejectAndDontRequeueException; this prevents message requeuing, regardless of the setting of the defaultRequeueRejected property.
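Applied to the container factory from the question, a hedged sketch of both options (illustrative, not the poster's final code):

factory.setErrorHandler(t -> {
    System.out.println("Error: " + t);
    // Rethrowing as AmqpRejectAndDontRequeueException makes the broker drop
    // (or dead-letter) the message instead of redelivering it endlessly.
    throw new AmqpRejectAndDontRequeueException("Unprocessable message", t);
});
// Alternatively, reject without requeue for all failed messages:
// factory.setDefaultRequeueRejected(false);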

Timeout waiting for connection from pool while polling S3 for Objects

I am working on a backend service which periodically polls an S3 bucket using Spring Integration AWS and processes the polled objects from S3. Below is the implementation:
@Configuration
@EnableIntegration
@IntegrationComponentScan
@EnableAsync
public class S3PollerConfiguration {

    //private static final Logger log = (Logger) LoggerFactory.getLogger(S3PollerConfiguration.class);

    @Value("${amazonProperties.bucketName}")
    private String bucketName;

    @Bean
    @InboundChannelAdapter(value = "s3FilesChannel", poller = @Poller(fixedDelay = "5"))
    public MessageSource<InputStream> s3InboundStreamingMessageSource() {
        S3StreamingMessageSource messageSource = new S3StreamingMessageSource(template());
        messageSource.setRemoteDirectory(bucketName);
        return messageSource;
    }

    @Bean
    public S3RemoteFileTemplate template() {
        return new S3RemoteFileTemplate(new S3SessionFactory(thumbnailGeneratorService.getImagesS3Client()));
    }

    @Bean
    public PollableChannel s3FilesChannel() {
        return new QueueChannel();
    }

    @Bean
    IntegrationFlow fileReadingFlow() throws IOException {
        return IntegrationFlows
                .from(s3InboundStreamingMessageSource(),
                        e -> e.poller(p -> p.fixedDelay(10, TimeUnit.SECONDS)))
                .handle(Message.class, (payload, header) -> processS3Object(payload.getHeaders(), payload.getPayload()))
                .get();
    }
}
I receive messages from S3 on object upload and I am able to process them using the input stream received as part of the message payload. But the problem is that I get a 'Timeout waiting for connection from pool' exception after receiving a few messages:
2019-01-06 02:19:06.156 ERROR 11322 --- [ask-scheduler-5] o.s.integration.handler.LoggingHandler : org.springframework.messaging.MessagingException: Failed to execute on session; nested exception is com.amazonaws.SdkClientException: Unable to execute HTTP request: Timeout waiting for connection from pool
at org.springframework.integration.file.remote.RemoteFileTemplate.execute(RemoteFileTemplate.java:445)
at org.springframework.integration.file.remote.RemoteFileTemplate.list(RemoteFileTemplate.java:405)
at org.springframework.integration.file.remote.AbstractRemoteFileStreamingMessageSource.listFiles(AbstractRemoteFileStreamingMessageSource.java:194)
at org.springframework.integration.file.remote.AbstractRemoteFileStreamingMessageSource.poll(AbstractRemoteFileStreamingMessageSource.java:180)
at org.springframework.integration.aws.inbound.S3StreamingMessageSource.poll(S3StreamingMessageSource.java:70)
at org.springframework.integration.file.remote.AbstractRemoteFileStreamingMessageSource.doReceive(AbstractRemoteFileStreamingMessageSource.java:153)
at org.springframework.integration.endpoint.AbstractMessageSource.receive(AbstractMessageSource.java:155)
at org.springframework.integration.endpoint.SourcePollingChannelAdapter.receiveMessage(SourcePollingChannelAdapter.java:236)
at org.springframework.integration.endpoint.AbstractPollingEndpoint.doPoll(AbstractPollingEndpoint.java:250)
I know the issue is related to not closing the opened S3Object, as described here: https://github.com/aws/aws-sdk-java/issues/1405, so I implemented closing the input stream of the S3Object received as part of the message payload. But that does not solve the issue and I keep getting the exceptions. Can someone help me fix this issue?
Your problem is that you still mix Messaging Annotations declarations with the Java DSL in your configuration.
It looks like in the fileReadingFlow you close those InputStreams in your processS3Object() method, but you do nothing with the InputStreams produced by the @InboundChannelAdapter(value = "s3FilesChannel", poller = @Poller(fixedDelay = "5")).
Why do you have it in the first place? What makes you keep that code if you don't use it?
The S3StreamingMessageSource is polled twice all the time: by the @InboundChannelAdapter and by IntegrationFlows.from().
You just have to remove the @InboundChannelAdapter from the S3StreamingMessageSource bean definition, and that's all.
Please read the Reference Manual to understand the purpose of that annotation and why you don't need it when you use the Java DSL:
https://docs.spring.io/spring-integration/reference/html/configuration.html#_using_the_literal_inboundchanneladapter_literal_annotation
https://docs.spring.io/spring-integration/reference/html/java-dsl.html#java-dsl-inbound-adapters
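For reference, the suggested change boils down to keeping only the Java DSL poller; a sketch of the trimmed bean definition (everything else in the configuration is assumed to stay the same):

@Bean
public MessageSource<InputStream> s3InboundStreamingMessageSource() {
    // No @InboundChannelAdapter here - fileReadingFlow() is now the only poller.
    S3StreamingMessageSource messageSource = new S3StreamingMessageSource(template());
    messageSource.setRemoteDirectory(bucketName);
    return messageSource;
}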

Resume transfer of files after connection reset FTP

I am building an application using Spring Integration which sends files from one FTP server (source) to another FTP server (target). I first transfer files from the source to a local directory using the inbound adapter and then send them from the local directory to the target using the outbound adapter.
My code seems to work fine and I am able to achieve my goal, but when the connection to the target FTP server is reset during a transfer, the transfer of files does not continue once the connection works again.
I used Java configuration with inbound and outbound adapters. Can anyone tell me whether it is possible to resume the file transfer somehow after the connection reset?
P.S.: I am a beginner at Spring, so correct me if I have done something wrong here. Thanks.
AppConfig.java:
@Configuration
@Component
public class FileTransferServiceConfig {

    @Autowired
    private ConfigurationService configurationService;

    public static final String FILE_POLLING_DURATION = "5000";

    @Bean
    public SessionFactory<FTPFile> sourceFtpSessionFactory() {
        DefaultFtpSessionFactory sf = new DefaultFtpSessionFactory();
        sf.setHost(configurationService.getSourceHostName());
        sf.setPort(Integer.parseInt(configurationService.getSourcePort()));
        sf.setUsername(configurationService.getSourceUsername());
        sf.setPassword(configurationService.getSourcePassword());
        return new CachingSessionFactory<FTPFile>(sf);
    }

    @Bean
    public SessionFactory<FTPFile> targetFtpSessionFactory() {
        DefaultFtpSessionFactory sf = new DefaultFtpSessionFactory();
        sf.setHost(configurationService.getTargetHostName());
        sf.setPort(Integer.parseInt(configurationService.getTargetPort()));
        sf.setUsername(configurationService.getTargetUsername());
        sf.setPassword(configurationService.getTargetPassword());
        return new CachingSessionFactory<FTPFile>(sf);
    }

    @MessagingGateway
    public interface MyGateway {

        @Gateway(requestChannel = "toFtpChannel")
        void sendToFtp(Message message);
    }

    @Bean
    public FtpInboundFileSynchronizer ftpInboundFileSynchronizer() {
        FtpInboundFileSynchronizer fileSynchronizer = new FtpInboundFileSynchronizer(sourceFtpSessionFactory());
        fileSynchronizer.setDeleteRemoteFiles(false);
        fileSynchronizer.setRemoteDirectory(configurationService.getSourceDirectory());
        fileSynchronizer.setFilter(new FtpSimplePatternFileListFilter(
                configurationService.getFileMask()));
        return fileSynchronizer;
    }

    @Bean
    @InboundChannelAdapter(channel = "ftpChannel",
            poller = @Poller(fixedDelay = FILE_POLLING_DURATION))
    public MessageSource<File> ftpMessageSource() {
        FtpInboundFileSynchronizingMessageSource source =
                new FtpInboundFileSynchronizingMessageSource(ftpInboundFileSynchronizer());
        source.setLocalDirectory(new File(configurationService.getLocalDirectory()));
        source.setAutoCreateLocalDirectory(true);
        source.setLocalFilter(new AcceptOnceFileListFilter<File>());
        return source;
    }

    @Bean
    @ServiceActivator(inputChannel = "ftpChannel")
    public MessageHandler targetHandler() {
        FtpMessageHandler handler = new FtpMessageHandler(targetFtpSessionFactory());
        handler.setRemoteDirectoryExpression(new LiteralExpression(
                configurationService.getTargetDirectory()));
        return handler;
    }
}
Application.java:
@SpringBootApplication
public class Application {

    public static ConfigurableApplicationContext context;

    public static void main(String[] args) {
        context = new SpringApplicationBuilder(Application.class)
                .web(false)
                .run(args);
    }

    @Bean
    @ServiceActivator(inputChannel = "ftpChannel")
    public MessageHandler sourceHandler() {
        return new MessageHandler() {

            @Override
            public void handleMessage(Message<?> message) throws MessagingException {
                Object payload = message.getPayload();
                System.out.println("Payload: " + payload);
                if (payload instanceof File) {
                    File file = (File) payload;
                    System.out.println("Trying to send " + file.getName() + " to target");
                }
                MyGateway gateway = context.getBean(MyGateway.class);
                gateway.sendToFtp(message);
            }
        };
    }
}
First of all, it isn't clear what that sourceHandler is for, but you really should be sure that it (or the targetHandler) is subscribed to the proper channel.
I somehow believe that in your target code the targetHandler is really subscribed to the toFtpChannel.
Anyway, that isn't related.
I think the problem here is exactly with the AcceptOnceFileListFilter and the error. The filter works first, during the directory scan, and loads all the local files into an in-memory queue for performance reasons. Then all of them are sent to the channel for processing. When we reach the targetHandler and get an exception, we just silently end up at the global errorChannel, losing the fact that the file hasn't been transferred. And this happens with all the remaining files in memory. I think the transfer is resumed anyway, but it will only work for new files in the remote directory.
I suggest you add an ExpressionEvaluatingRequestHandlerAdvice to the targetHandler definition (@ServiceActivator(adviceChain)) and, in case of error, call AcceptOnceFileListFilter.remove(File):
/**
 * Remove the specified file from the filter so it will pass on the next attempt.
 * @param f the element to remove.
 * @return true if the file was removed as a result of this call.
 */
boolean remove(F f);
This way you remove the failed files from the filter so they will be picked up on the next poll. You have to expose the AcceptOnceFileListFilter so that it is accessible from the onFailureExpression. The file is the payload of the request message.
EDIT
The sample for the ExpressionEvaluatingRequestHandlerAdvice:
@Bean
public Advice expressionAdvice() {
    ExpressionEvaluatingRequestHandlerAdvice advice = new ExpressionEvaluatingRequestHandlerAdvice();
    advice.setOnFailureExpressionString("@acceptOnceFileListFilter.remove(payload)");
    advice.setTrapException(true);
    return advice;
}
...
@ServiceActivator(inputChannel = "ftpChannel", adviceChain = "expressionAdvice")
Everything else you can get from their JavaDocs.
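Note that the @acceptOnceFileListFilter referenced in the onFailureExpression must be a bean; a hedged sketch of exposing it (the bean name is an assumption):

@Bean
public AcceptOnceFileListFilter<File> acceptOnceFileListFilter() {
    return new AcceptOnceFileListFilter<>();
}

The ftpMessageSource() bean would then call source.setLocalFilter(acceptOnceFileListFilter()) instead of instantiating the filter inline, so the advice and the message source share the same filter instance.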
