Reject RabbitMQ message with per-message TTL - Java

I have added a TTL (30 seconds) to the queue during its auto-creation in the RabbitConfig class. I would like to extend/reduce the TTL (to, say, 10 seconds) for certain messages and reject them, when necessary, inside the @RabbitListener method (listenToMain()).
I have managed to figure out that I can include the TTL (10 seconds) in every message, which I do in MyService.call(), but when the message goes back to the main-queue, the TTL reverts to the TTL (30 seconds) I set on the queue. How can I code it so that the TTL set in MyService.call() always applies?
Here is my code:
RabbitConfig.java
@Configuration
public class RabbitConfig {

    /* Setup all the RabbitTemplate etc. code */

    // Auto-creation of queues, exchanges, and bindings
    String mainQueue = "main-queue";
    String subQueue = "sub-queue";
    String mainExchange = "main-exchange";
    String subExchange = "sub-exchange";
    String mainRouting = "mainroute";
    String subRouting = "subroute";

    @Bean
    Queue mainQueue() {
        Map<String, Object> args = new HashMap<>();
        args.put("x-dead-letter-exchange", subExchange);
        args.put("x-dead-letter-routing-key", subRouting);
        // Builds the queue with the args above (assuming a durable queue)
        return QueueBuilder.durable(mainQueue).withArguments(args).build();
    }

    @Bean
    Queue subQueue() {
        Map<String, Object> args = new HashMap<>();
        args.put("x-dead-letter-exchange", mainExchange);
        args.put("x-dead-letter-routing-key", mainRouting);
        args.put("x-message-ttl", 30000);
        return QueueBuilder.durable(subQueue).withArguments(args).build();
    }

    @Bean
    Exchange mainExchange() {
        // Assuming direct exchanges; adjust the builder to whatever type you use
        return ExchangeBuilder.directExchange(mainExchange).build();
    }

    @Bean
    Exchange subExchange() {
        return ExchangeBuilder.directExchange(subExchange).build();
    }

    @Bean
    Binding mainBinding() {
        // Binds mainQueue() to mainExchange() with the main routing key
        return BindingBuilder.bind(mainQueue()).to(mainExchange()).with(mainRouting).noargs();
    }

    @Bean
    Binding subBinding() {
        return BindingBuilder.bind(subQueue()).to(subExchange()).with(subRouting).noargs();
    }
}
MyMessagePostProcessor.java
public class MyMessagePostProcessor implements MessagePostProcessor {

    private final Integer ttl;

    public MyMessagePostProcessor(final Integer ttl) {
        this.ttl = ttl;
    }

    @Override
    public Message postProcessMessage(final Message message) throws AmqpException {
        // Per-message TTL is the AMQP "expiration" property (a string of
        // milliseconds), not a custom header
        message.getMessageProperties().setExpiration(ttl.toString());
        return message;
    }
}
MyService.java
@Service
public class MyService {

    @Autowired
    private RabbitTemplate rabbitTemplate;

    public void call() {
        final String message = "message";
        // 10-second per-message TTL
        final MessagePostProcessor messagePostProcessor = new MyMessagePostProcessor(10000);
        rabbitTemplate.convertAndSend("sub-exchange", "subroute", message, messagePostProcessor);
    }
}
RabbitConsume.java
@Service
public class RabbitConsume {

    @RabbitListener(queues = "main-queue")
    public void listenToMain(String msg, Channel channel,
            @Header(AmqpHeaders.DELIVERY_TAG) long tag) throws IOException {
        channel.basicReject(tag, false);
    }
}
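A likely explanation, for what it's worth: RabbitMQ removes the per-message expiration property whenever it dead-letters a message, so the 10-second TTL set in MyService.call() is lost as soon as the message is dead-lettered out of sub-queue, and from then on only the queue-level 30-second x-message-ttl applies. A sketch of one workaround, assuming re-publishing is acceptable for your use case: in the listener, ack instead of reject and re-send through the same post-processor so each hop carries the per-message TTL again (shouldReject() is a hypothetical placeholder for your real criteria):

@Service
public class RabbitConsume {

    @Autowired
    private RabbitTemplate rabbitTemplate;

    @RabbitListener(queues = "main-queue")
    public void listenToMain(String msg, Channel channel,
            @Header(AmqpHeaders.DELIVERY_TAG) long tag) throws IOException {
        if (shouldReject(msg)) {
            // Ack (not reject) so the broker forgets this delivery, then re-publish
            // with the post-processor so the 10s expiration is set on this hop too
            channel.basicAck(tag, false);
            rabbitTemplate.convertAndSend("sub-exchange", "subroute", msg,
                    new MyMessagePostProcessor(10000));
        } else {
            channel.basicAck(tag, false);
        }
    }

    private boolean shouldReject(String msg) {
        return true; // hypothetical placeholder
    }
}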

Related

Trigger one Kafka consumer by using values of another consumer In Spring Kafka

I have one scheduler which produces one event. My consumer consumes this event. The payload of this event is a JSON with the fields below:
private String topic;
private String partition;
private String filterKey;
private long CustId;
Now I need to trigger a second consumer, which will use the information from the first consumer's response.
@KafkaListener(topics = "<**topic-name-from-first-consumer-response**>", groupId = "group", containerFactory = "kafkaListenerFactory")
public void consumeJson(List<User> data, Acknowledgment acknowledgment,
@Header(KafkaHeaders.RECEIVED_PARTITION_ID) List<Integer> partitions,
@Header(KafkaHeaders.OFFSET) List<Long> offsets) {
// consumer code goes here...
}
I need to create some dynamic variable that I can pass in place of the topic name.
Similarly, I am using filtering in the configuration file, and I need to pass the key dynamically in the configuration.
factory.setRecordFilterStrategy(new RecordFilterStrategy<String, Object>() {
@Override
public boolean filter(ConsumerRecord<String, Object> consumerRecord) {
if(consumerRecord.key().equals("**Key will go here**")) {
return false;
}
else {
return true;
}
}
});
How can we dynamically inject these values from the response of the first consumer and trigger the second consumer? Both consumers are in the same application.
You cannot do that with an annotated listener; the configuration is only used during initialization. You would need to create a listener container yourself (using the ConcurrentKafkaListenerContainerFactory) to dynamically create a listener.
EDIT
Here's an example.
@SpringBootApplication
public class So69134055Application {
public static void main(String[] args) {
SpringApplication.run(So69134055Application.class, args);
}
@Bean
public NewTopic topic() {
return TopicBuilder.name("so69134055").partitions(1).replicas(1).build();
}
}
@Component
class Listener {
private static final Logger log = LoggerFactory.getLogger(Listener.class);
private static final Method otherListen;
static {
try {
otherListen = Listener.class.getDeclaredMethod("otherListen", List.class);
}
catch (NoSuchMethodException | SecurityException ex) {
throw new IllegalStateException(ex);
}
}
private final ConcurrentKafkaListenerContainerFactory<String, String> factory;
private final MessageHandlerMethodFactory methodFactory;
private final KafkaAdmin admin;
private final KafkaTemplate<String, String> template;
public Listener(ConcurrentKafkaListenerContainerFactory<String, String> factory, KafkaAdmin admin,
KafkaTemplate<String, String> template, KafkaListenerAnnotationBeanPostProcessor<?, ?> bpp) {
this.factory = factory;
this.admin = admin;
this.template = template;
this.methodFactory = bpp.getMessageHandlerMethodFactory();
}
@KafkaListener(id = "so69134055", topics = "so69134055")
public void listen(String topicName) {
try (AdminClient client = AdminClient.create(this.admin.getConfigurationProperties())) {
NewTopic topic = TopicBuilder.name(topicName).build();
client.createTopics(List.of(topic)).all().get(10, TimeUnit.SECONDS);
}
catch (Exception e) {
log.error("Failed to create topic", e);
}
ConcurrentMessageListenerContainer<String, String> container =
this.factory.createContainer(new TopicPartitionOffset(topicName, 0));
BatchMessagingMessageListenerAdapter<String, String> adapter =
new BatchMessagingMessageListenerAdapter<>(this, otherListen);
adapter.setHandlerMethod(new HandlerAdapter(
this.methodFactory.createInvocableHandlerMethod(this, otherListen)));
FilteringBatchMessageListenerAdapter<String, String> filtered =
new FilteringBatchMessageListenerAdapter<>(adapter, record -> !record.key().equals("foo"));
container.getContainerProperties().setMessageListener(filtered);
container.getContainerProperties().setGroupId("group.for." + topicName);
container.setBeanName(topicName + ".container");
container.start();
IntStream.range(0, 10).forEach(i -> this.template.send(topicName, 0, i % 2 == 0 ? "foo" : "bar", "test" + i));
}
void otherListen(List<String> others) {
log.info("Others: {}", others);
}
}
spring.kafka.consumer.auto-offset-reset=earliest
Output - showing that the filter discarded the records with bar as the key.
Others: [test0, test2, test4, test6, test8]

Spring Kafka: Close the container and read the messages from specific offset with ConcurrentKafkaListenerContainerFactory

In my Spring Kafka application, I want to trigger the consumer at run time according to input from a scheduler. The scheduler will tell the listener which topic to start consuming messages from. There is a Spring Boot application with a custom ConcurrentKafkaListenerContainerFactory class. I need to perform three tasks:
close the container after successfully reading all the messages available on the topic;
store the current offset in a DB or file system;
next time the consumer comes up, use the stored offset to process the records instead of the default offset managed by Kafka, so that in the future we can change the offset value in the DB and get the desired reports.
I know how to handle all of this with @KafkaListener, but I'm not sure how to hook it into ConcurrentKafkaListenerContainerFactory. The current code is listed below:
@SpringBootApplication
public class KafkaApp{
public static void main(String[] args) {
SpringApplication.run(KafkaApp.class, args);
}
@Bean
public NewTopic topic() {
return TopicBuilder.name("testTopic").partitions(1).replicas(1).build();
}
}
@Component
class Listener {
private static final Logger log = LoggerFactory.getLogger(Listener.class);
private static final Method otherListen;
static {
try {
otherListen = Listener.class.getDeclaredMethod("otherListen", List.class);
}
catch (NoSuchMethodException | SecurityException ex) {
throw new IllegalStateException(ex);
}
}
private final ConcurrentKafkaListenerContainerFactory<String, String> factory;
private final MessageHandlerMethodFactory methodFactory;
private final KafkaAdmin admin;
private final KafkaTemplate<String, String> template;
public Listener(ConcurrentKafkaListenerContainerFactory<String, String> factory, KafkaAdmin admin,
KafkaTemplate<String, String> template, KafkaListenerAnnotationBeanPostProcessor<?, ?> bpp) {
this.factory = factory;
this.admin = admin;
this.template = template;
this.methodFactory = bpp.getMessageHandlerMethodFactory();
}
@KafkaListener(id = "myId", topics = "testTopic")
public void listen(String topicName) {
try (AdminClient client = AdminClient.create(this.admin.getConfigurationProperties())) {
NewTopic topic = TopicBuilder.name(topicName).build();
client.createTopics(List.of(topic)).all().get(10, TimeUnit.SECONDS);
}
catch (Exception e) {
log.error("Failed to create topic", e);
}
ConcurrentMessageListenerContainer<String, String> container =
this.factory.createContainer(new TopicPartitionOffset(topicName, 0));
BatchMessagingMessageListenerAdapter<String, String> adapter =
new BatchMessagingMessageListenerAdapter<>(this, otherListen);
adapter.setHandlerMethod(new HandlerAdapter(
this.methodFactory.createInvocableHandlerMethod(this, otherListen)));
FilteringBatchMessageListenerAdapter<String, String> filtered =
new FilteringBatchMessageListenerAdapter<>(adapter, record -> !record.key().equals("foo"));
container.getContainerProperties().setMessageListener(filtered);
container.getContainerProperties().setGroupId("group.for." + topicName);
container.setBeanName(topicName + ".container");
container.start();
IntStream.range(0, 10).forEach(i -> this.template.send(topicName, 0, i % 2 == 0 ? "foo" : "bar", "test" + i));
}
void otherListen(List<String> others) {
log.info("Others: {}", others);
}
}
EDIT
@SpringBootApplication
public class KafkaApp{
public static void main(String[] args) {
SpringApplication.run(KafkaApp.class, args);
}
@Bean
public NewTopic topic() {
return TopicBuilder.name("testTopic").partitions(1).replicas(1).build();
}
}
@Component
class Listener {
private static final Logger log = LoggerFactory.getLogger(Listener.class);
private static final Method otherListen;
static {
try {
otherListen = Listener.class.getDeclaredMethod("otherListen", List.class);
}
catch (NoSuchMethodException | SecurityException ex) {
throw new IllegalStateException(ex);
}
}
private final ConcurrentKafkaListenerContainerFactory<String, String> factory;
private final MessageHandlerMethodFactory methodFactory;
private final KafkaAdmin admin;
private final KafkaTemplate<String, String> template;
public Listener(ConcurrentKafkaListenerContainerFactory<String, String> factory, KafkaAdmin admin,
KafkaTemplate<String, String> template, KafkaListenerAnnotationBeanPostProcessor<?, ?> bpp) {
this.factory = factory;
this.admin = admin;
this.template = template;
this.methodFactory = bpp.getMessageHandlerMethodFactory();
}
@KafkaListener(id = "myId", topics = "testTopic")
public void listen(String topicName) {
try (AdminClient client = AdminClient.create(this.admin.getConfigurationProperties())) {
NewTopic topic = TopicBuilder.name(topicName).build();
client.createTopics(List.of(topic)).all().get(10, TimeUnit.SECONDS);
}
catch (Exception e) {
log.error("Failed to create topic", e);
}
ConcurrentMessageListenerContainer<String, String> container =
this.factory.createContainer(new TopicPartitionOffset(topicName, 0));
BatchMessagingMessageListenerAdapter<String, String> adapter =
new BatchMessagingMessageListenerAdapter<>(this, otherListen);
adapter.setHandlerMethod(new HandlerAdapter(
this.methodFactory.createInvocableHandlerMethod(this, otherListen)));
FilteringBatchMessageListenerAdapter<String, String> filtered =
new FilteringBatchMessageListenerAdapter<>(adapter, record -> !record.key().equals("foo"));
container.getContainerProperties().setMessageListener(filtered);
container.getContainerProperties().setGroupId("group.for." + topicName);
container.setBeanName(topicName + ".container");
container.getContainerProperties().setIdleEventInterval(3000L);
container.start();
IntStream.range(0, 10).forEach(i -> this.template.send(topicName, 0, i % 2 == 0 ? "foo" : "bar", "test" + i));
}
void otherListen(List<String> others) {
log.info("Others: {}", others);
}
@EventListener
public void eventHandler(ListenerContainerIdleEvent event) {
log.info("No messages received for " + event.getIdleTime() + " milliseconds");
}
}
You can receive ListenerContainerIdleEvents when there are no messages left to process; you can use this event to stop the container. You should perform the stop() on a different thread (not the one that publishes the event).
See How to check if Kafka is empty using Spring Kafka?
You can get the partition/offset in several ways.
void otherListen(List<ConsumerRecord<..., ...>> records)
or
void otherListen(List<String> others,
@Header(KafkaHeaders.RECEIVED_PARTITION) List<Integer> partitions,
@Header(KafkaHeaders.OFFSET) List<Long> offsets)
You can specify the starting offset in the
new TopicPartitionOffset(topicName, 0, startOffset)
when creating the container.
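For illustration, a minimal sketch of feeding a persisted offset back in when creating the container; offsetStore and its findLastOffset() lookup are hypothetical stand-ins for whatever DB or file-system storage holds the offset:

// Look up the offset persisted by the previous run (hypothetical store)
Long startOffset = offsetStore.findLastOffset(topicName);
ConcurrentMessageListenerContainer<String, String> container =
        this.factory.createContainer(new TopicPartitionOffset(topicName, 0, startOffset));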
EDIT
To stop the container when it is idle, set the idleEventInterval, add an @EventListener method, and stop the container there.
TaskExecutor exec = new SimpleAsyncTaskExecutor();
@EventListener
void idle(ListenerContainerIdleEvent event) {
log...
this.exec.execute(() -> event.getContainer(ConcurrentMessageListenerContainer.class).stop());
}
If you add concurrency to your containers, you would need to wait for each child container to go idle before stopping the parent container.
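A sketch of one way to do that, assuming the parent container was created with concurrency and that each child publishes its own idle event; the idleChildren set is something you would add yourself:

private final Set<String> idleChildren = ConcurrentHashMap.newKeySet();

@EventListener
void idleChild(ListenerContainerIdleEvent event) {
    // Remember which children have gone idle; stop the parent once all have
    this.idleChildren.add(event.getListenerId());
    ConcurrentMessageListenerContainer<?, ?> parent =
            event.getContainer(ConcurrentMessageListenerContainer.class);
    if (this.idleChildren.size() == parent.getConcurrency()) {
        this.exec.execute(parent::stop); // still stop on a separate thread
    }
}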
EDIT2
I just added it to the code I wrote for the answer to your other question and it works exactly as expected.
@KafkaListener(id = "so69134055", topics = "so69134055")
public void listen(String topicName) {
try (AdminClient client = AdminClient.create(this.admin.getConfigurationProperties())) {
NewTopic topic = TopicBuilder.name(topicName).build();
client.createTopics(List.of(topic)).all().get(10, TimeUnit.SECONDS);
}
catch (Exception e) {
log.error("Failed to create topic", e);
}
ConcurrentMessageListenerContainer<String, String> container =
this.factory.createContainer(new TopicPartitionOffset(topicName, 0));
BatchMessagingMessageListenerAdapter<String, String> adapter =
new BatchMessagingMessageListenerAdapter<>(this, otherListen);
adapter.setHandlerMethod(new HandlerAdapter(
this.methodFactory.createInvocableHandlerMethod(this, otherListen)));
FilteringBatchMessageListenerAdapter<String, String> filtered =
new FilteringBatchMessageListenerAdapter<>(adapter, record -> !record.key().equals("foo"));
container.getContainerProperties().setMessageListener(filtered);
container.getContainerProperties().setGroupId("group.for." + topicName);
container.getContainerProperties().setIdleEventInterval(3000L);
container.setBeanName(topicName + ".container");
container.start();
IntStream.range(0, 10).forEach(i -> this.template.send(topicName, 0, i % 2 == 0 ? "foo" : "bar", "test" + i));
}
void otherListen(List<String> others) {
log.info("Others: {}", others);
}
TaskExecutor exec = new SimpleAsyncTaskExecutor();
@EventListener
public void idle(ListenerContainerIdleEvent event) {
log.info(event.toString());
this.exec.execute(() -> {
ConcurrentMessageListenerContainer container = event.getContainer(ConcurrentMessageListenerContainer.class);
log.info("stopping container: " + container.getBeanName());
container.stop();
});
}
[foo.container-0-C-1] Others: [test0, test2, test4, test6, test8]
[foo.container-0-C-1] ListenerContainerIdleEvent [idleTime=5.007s, listenerId=foo.container-0, container=KafkaMessageListenerContainer [id=foo.container-0, clientIndex=-0, topicPartitions=[foo-0]], paused=false, topicPartitions=[foo-0]]
[SimpleAsyncTaskExecutor-1] stopping container: foo.container
[foo.container-0-C-1] [Consumer clientId=consumer-group.for.foo-2, groupId=group.for.foo] Unsubscribed all topics or patterns and assigned partitions
[foo.container-0-C-1] Metrics scheduler closed
[foo.container-0-C-1] Closing reporter org.apache.kafka.common.metrics.JmxReporter
[foo.container-0-C-1] Metrics reporters closed
[foo.container-0-C-1] App info kafka.consumer for consumer-group.for.foo-2 unregistered
[foo.container-0-C-1] group.for.foo: Consumer stopped

ActiveMQ dequeue not consumed messages

I'm pretty new to ActiveMQ and message brokers. I am facing a problem and would like to get some tips and information from you.
I have multiple queues, with only one listener per queue, and everything works fine until the connection is lost. When the connection is re-established, the connection between the queue and the subscriber is established as well, but the messages that weren't consumed are no longer delivered after re-connection.
How can I re-send the messages that weren't consumed yet?
I am using Spring and was looking for a configuration to make this work, but didn't find any; afterwards I looked for a work-around, but I am still unable to find a solution.
Any information, solution, or link related to my problem would be very helpful!
Here is my JMSConfig class:
@Configuration
class JMSConfig {
private final String BROKER_URL = ConfigurationManager.getConfig().getString("broker.url");
private final String BROKER_USERNAME = ConfigurationManager.getConfig().getString("broker.username");
private final String BROKER_PASSWORD = ConfigurationManager.getConfig().getString("broker.password");
private final long SENDER_RECEIVE_TIMEOUT = ConfigurationManager.getConfig().getLong("sender.receive.timeout");
private final long SENDING_TIME_TO_LIVE = ConfigurationManager.getConfig().getLong("sending.time.to.live");
private final String RECEIVER_LISTENER_CONCURRENCY = ConfigurationManager.getConfig().getString("receiver.listener.concurrency");
private final boolean USE_POOLED_CONNECTION_FACTORY = ConfigurationManager.getConfig().getBoolean("use.pooled.connection.factory");
private final int POOL_MAX_CONNECTIONS = ConfigurationManager.getConfig().getInt("pool.max.connections");
private final boolean USE_TOPICS = ConfigurationManager.getConfig().getBoolean("use.topics");
private final static boolean OVERRIDE_REDELIVERY = false;
static {
ConfigurationManager.registerConfiguration("messaging", ConfigurationPriority.LOWEST);
}
@Bean
public ConnectionFactory connectionFactory() {
LoggerUtilities.logger().info("Registering connectionFactory");
// Setup the JMS connection
ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory();
connectionFactory.setBrokerURL(BROKER_URL);
connectionFactory.setUserName(BROKER_USERNAME);
connectionFactory.setPassword(BROKER_PASSWORD);
// Does not work with spring.activemq.packages.trust-all=true so we need to setup manually
// Avoiding "This class is not trusted to be serialized as ObjectMessage payload"
connectionFactory.setTrustAllPackages(true);
if (OVERRIDE_REDELIVERY) {
overrideRedelivery(connectionFactory);
}
if (USE_POOLED_CONNECTION_FACTORY) {
// Using PooledConnectionFactory will ensure that on the ActiveMQ server does not get "Exceeded the maximum number of allowed client connections."
// This setup is also found in /apache-activemq-5.15.6/conf/activemq.xml > maximumConnections
PooledConnectionFactory pooledConnectionFactory = new PooledConnectionFactory(connectionFactory);
pooledConnectionFactory.setMaxConnections(POOL_MAX_CONNECTIONS);
return pooledConnectionFactory;
}
return connectionFactory;
}
@Bean
public JmsTemplate jmsTemplate() {
LoggerUtilities.logger().info("Registering jmsTemplate");
JmsTemplate template = new JmsTemplate();
template.setConnectionFactory(connectionFactory());
// Qos needs to be enabled when getDeliveryMode(), getPriority(), getTimeToLive() is needed
template.setExplicitQosEnabled(true);
template.setDeliveryPersistent(false);
template.setSessionAcknowledgeMode(Session.AUTO_ACKNOWLEDGE);
template.setTimeToLive(SENDING_TIME_TO_LIVE);
template.setReceiveTimeout(SENDER_RECEIVE_TIMEOUT);
if (USE_TOPICS) {
template.setPubSubDomain(true);
}
return template;
}
@Bean
public DefaultJmsListenerContainerFactory jmsListenerContainerFactory() {
LoggerUtilities.logger().info("Registering jmsListenerContainerFactory");
DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
if (USE_TOPICS) {
factory.setPubSubDomain(true);
}
factory.setConnectionFactory(connectionFactory());
factory.setConcurrency(RECEIVER_LISTENER_CONCURRENCY);
factory.setErrorHandler(throwable -> {
String exceptionMessage = "[RECEIVER] Error on the listener = " + throwable.getMessage();
LoggerUtilities.logger().warn(exceptionMessage);
});
return factory;
}
@Bean
public MessageConverter jacksonJmsMessageConverter() {
LoggerUtilities.logger().info("Registering jacksonJmsMessageConverter");
// Serializing message content to json
MappingJackson2MessageConverter converter = new MappingJackson2MessageConverter();
converter.setTargetType(MessageType.TEXT);
converter.setTypeIdPropertyName("_type");
return converter;
}
private void overrideRedelivery(ActiveMQConnectionFactory connectionFactory) {
LoggerUtilities.logger().info("Registering overrideRedelivery");
// Overriding redelivery policy
RedeliveryPolicy policy = connectionFactory.getRedeliveryPolicy();
policy.setInitialRedeliveryDelay(0);
policy.setBackOffMultiplier(0);
policy.setUseExponentialBackOff(false);
policy.setMaximumRedeliveries(RedeliveryPolicy.NO_MAXIMUM_REDELIVERIES);
}
}
Here are the properties:
spring.jms.listener.acknowledge-mode= auto
spring.jms.listener.auto-startup= true
spring.jms.listener.concurrency= 3
spring.jms.listener.max-concurrency= 9
spring.jms.listener.pub-sub-domain= false
spring.jms.listener.receive-timeout=${BROKER_TIMEOUT}
spring.jms.template.delivery-mode= non_persistent
spring.jms.template.priority= 5
spring.jms.template.qos-enabled= true
spring.jms.template.receive-timeout= ${BROKER_TIMEOUT}
spring.jms.template.time-to-live= ${BROKER_TIMEOUT}
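One thing worth double-checking in the setup above (an educated guess, not a confirmed diagnosis): the template sends non-persistent messages with a time-to-live, and JMS brokers silently discard expired messages, so anything sitting on a queue longer than SENDING_TIME_TO_LIVE while the consumer is disconnected will never be delivered, and non-persistent messages do not survive a broker restart at all. A sketch of more conservative template settings:

// Persistent delivery survives a broker restart; TTL 0 means messages never expire
template.setExplicitQosEnabled(true);
template.setDeliveryPersistent(true);
template.setTimeToLive(0);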

Spring Batch Integration Remote Chunking error - Message contained wrong job instance id [25] should have been [24]

I'm running into this bug (more info here), which appears to mean that for multi-threaded batches using remote chunking you can't use a common response channel. I'm not exactly sure how to proceed to get this working. Surely there's a way, because without it I can't see much benefit to remote chunking.
Here's my DSL config that creates a JobRequest:
@Bean
IntegrationFlow newPollingJobsAdapter(JobLaunchingGateway jobLaunchingGateway) {
// Start by polling the DB for new PollingJobs according to the polling rate
return IntegrationFlows.from(jdbcPollingChannelAdapter(),
c -> c.poller(Pollers.fixedRate(10000)
// Do the polling on one of 10 threads.
.taskExecutor(Executors.newFixedThreadPool(10))
// pull out up to 100 new ids for each poll.
.maxMessagesPerPoll(100)))
.log(LoggingHandler.Level.WARN)
// The polling adapter above returns a list of ids. Split them out into
// individual ids
.split()
// Now push each one onto a separate thread for batch processing.
.channel(MessageChannels.executor(Executors.newFixedThreadPool(10)))
.log(LoggingHandler.Level.WARN)
// Transform each one into a JobLaunchRequest
.<Long, JobLaunchRequest>transform(id -> {
logger.warn("Creating job for ID {}", id);
JobParametersBuilder builder = new JobParametersBuilder()
.addLong("polling-job-id", id, true);
return new JobLaunchRequest(job, builder.toJobParameters());
})
.handle(jobLaunchingGateway)
// TODO: Notify somebody? No idea yet
.<JobExecution>handle(exec -> System.out.println("GOT EXECUTION: " + exec))
.get();
}
Nothing in here is particularly special; there are no odd configs that I'm aware of.
The job itself is pretty straightforward, too:
/**
* This is the definition of the entire batch process that runs polling.
* @return
*/
@Bean
Job pollingJobJob() {
return jobBuilderFactory.get("pollingJobJob")
.incrementer(new RunIdIncrementer())
// Ship it down to the slaves for actual processing
.start(remoteChunkingStep())
// Now mark it as complete
.next(markCompleteStep())
.build();
}
/**
* Sends the job to a remote slave via an ActiveMQ-backed JMS queue.
*/
@Bean
TaskletStep remoteChunkingStep() {
return stepBuilderFactory.get("polling-job-step-remote-chunking")
.<Long, String>chunk(20)
.reader(runningPollingJobItemReader)
.processor(toJsonProcessor())
.writer(chunkWriter)
.build();
}
/**
* This step just marks the PollerJob as Complete.
*/
@Bean
Step markCompleteStep() {
return stepBuilderFactory.get("polling-job-step-mark-complete")
// We want each PollerJob instance to be a separate job in batch, and the
// reader is using the id passed in via job params to grab the one we want,
// so we don't need a large chunk size. One at a time is fine.
.<Long, Long>chunk(1)
.reader(runningPollingJobItemReader)
.processor(new PassThroughItemProcessor<Long>())
.writer(this.completeStatusWriter)
.build();
}
Here's the chunk writer config:
/**
* This is part of the bridge between the spring-batch and spring-integration. Nothing special or weird is going
* on, so see the RemoteChunkHandlerFactoryBean for a description.
*/
@Bean
RemoteChunkHandlerFactoryBean<PollerJob> remoteChunkHandlerFactoryBean() {
RemoteChunkHandlerFactoryBean<PollerJob> factory = new RemoteChunkHandlerFactoryBean<>();
factory.setChunkWriter(chunkWriter);
factory.setStep(remoteChunkingStep());
return factory;
}
/**
* This is the writer that will actually send the chunk to the slaves. Note that it also configures the
* internal channel on which replies are expected.
*/
@Bean
@StepScope
ChunkMessageChannelItemWriter<String> chunkWriter() {
ChunkMessageChannelItemWriter<String> writer = new ChunkMessageChannelItemWriter<>();
writer.setMessagingOperations(batchMessagingTemplate());
writer.setReplyChannel(batchResponseChannel());
writer.setThrottleLimit(1000);
return writer;
}
The problem seems to be that the last section sets up the ChunkMessageChannelItemWriter such that the replyChannel is the same one used by all of the writers, despite each writer being step-scoped. It would seem that I need to add a replyChannel header to one of the messages, but I'm not sure where in the chain to do that or how to process it (if I need to at all).
Also, this is being sent to the slaves via JMS/ActiveMQ, and I'd like to avoid having a stupid number of nearly-identical queues on ActiveMQ just to support this.
What are my options?
Given that you are using a shared JMS infrastructure, you will need a router to get the responses back to the correct chunk writer.
If you use prototype scope on the batchResponseChannel() @Bean, you'll get a unique channel for each writer.
I don't have time to figure out how to set up a chunked batch job, so the following simulates your environment (a non-singleton bean that needs a unique reply channel for each instance). Hopefully it's self-explanatory...
@SpringBootApplication
public class So44806067Application {
public static void main(String[] args) {
ConfigurableApplicationContext context = SpringApplication.run(So44806067Application.class, args);
SomeNonSingletonNeedingDistinctRequestAndReplyChannels chunker1 = context
.getBean(SomeNonSingletonNeedingDistinctRequestAndReplyChannels.class);
SomeNonSingletonNeedingDistinctRequestAndReplyChannels chunker2 = context
.getBean(SomeNonSingletonNeedingDistinctRequestAndReplyChannels.class);
if (chunker1.equals(chunker2)) {
throw new IllegalStateException("Expected different instances");
}
chunker1.sendSome();
chunker2.sendSome();
ChunkResponse results = chunker1.getResults();
if (results == null) {
throw new IllegalStateException("No results1");
}
if (results.getJobId() != 1L) {
throw new IllegalStateException("Incorrect routing1");
}
results = chunker2.getResults();
if (results == null) {
throw new IllegalStateException("No results2");
}
if (results.getJobId() != 2L) {
throw new IllegalStateException("Incorrect routing2");
}
context.close();
}
@Bean
public Map<Long, PollableChannel> registry() {
// TODO: should clean up entry for jobId when job completes.
return new ConcurrentHashMap<>();
}
@Bean
@Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
public SomeNonSingletonNeedingDistinctRequestAndReplyChannels chunker() {
MessagingTemplate template = template();
final PollableChannel replyChannel = replyChannel();
SomeNonSingletonNeedingDistinctRequestAndReplyChannels bean =
new SomeNonSingletonNeedingDistinctRequestAndReplyChannels(template, replyChannel);
AbstractSubscribableChannel requestChannel = (AbstractSubscribableChannel) template.getDefaultDestination();
requestChannel.addInterceptor(new ChannelInterceptorAdapter() {
@Override
public Message<?> preSend(Message<?> message, MessageChannel channel) {
registry().putIfAbsent(((ChunkRequest<?>) message.getPayload()).getJobId(), replyChannel);
return message;
}
});
BridgeHandler bridge = bridge();
requestChannel.subscribe(bridge);
return bean;
}
@Bean
@Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
public MessagingTemplate template() {
MessagingTemplate messagingTemplate = new MessagingTemplate();
messagingTemplate.setDefaultChannel(requestChannel());
return messagingTemplate;
}
@Bean
@Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
public DirectChannel requestChannel() {
return new DirectChannel();
}
@Bean
@Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
public PollableChannel replyChannel() {
return new QueueChannel();
}
@Bean
@Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
public BridgeHandler bridge() {
BridgeHandler bridgeHandler = new BridgeHandler();
bridgeHandler.setOutputChannel(outboundChannel());
return bridgeHandler;
}
@Bean
public DirectChannel outboundChannel() {
return new DirectChannel();
}
@Bean
public DirectChannel masterReplyChannel() {
return new DirectChannel();
}
@ServiceActivator(inputChannel = "outboundChannel")
public void simulateJmsChannelAdapterPair(ChunkRequest<?> request) {
masterReplyChannel()
.send(new GenericMessage<>(new ChunkResponse(request.getSequence(), request.getJobId(), null)));
}
@Router(inputChannel = "masterReplyChannel")
public MessageChannel route(ChunkResponse reply) {
// TODO: error checking - missing reply channel for jobId
return registry().get(reply.getJobId());
}
public static class SomeNonSingletonNeedingDistinctRequestAndReplyChannels {
private final static AtomicLong jobIds = new AtomicLong();
private final long jobId = jobIds.incrementAndGet();
private final MessagingTemplate template;
private final PollableChannel replyChannel;
public SomeNonSingletonNeedingDistinctRequestAndReplyChannels(MessagingTemplate template,
PollableChannel replyChannel) {
this.template = template;
this.replyChannel = replyChannel;
}
public void sendSome() {
ChunkRequest<String> cr = new ChunkRequest<>(0, Collections.singleton("foo"), this.jobId, null);
this.template.send(new GenericMessage<>(cr));
}
public ChunkResponse getResults() {
@SuppressWarnings("unchecked")
Message<ChunkResponse> received = (Message<ChunkResponse>) this.replyChannel.receive(10_000);
if (received != null) {
if (received.getPayload().getJobId().equals(this.jobId)) {
System.out.println("Got the right one");
}
else {
System.out.println(
"Got the wrong one " + received.getPayload().getJobId() + " instead of " + this.jobId);
}
return received.getPayload();
}
return null;
}
}
}

Set SNS WaitTime in Spring AWS Cloud Framework

I am using the Spring AWS Cloud Framework to poll for S3 event notifications on a queue, via the QueueMessagingTemplate. I want to be able to set the max number of messages and the wait time for polling; see: http://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_ReceiveMessage.html.
import com.amazonaws.services.s3.event.S3EventNotification;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.aws.messaging.core.QueueMessagingTemplate;
public class MyQueue {
private QueueMessagingTemplate queueMsgTemplate;
@Autowired
public MyQueue(QueueMessagingTemplate queueMsgTemplate) {
this.queueMsgTemplate = queueMsgTemplate;
}
@Override
public S3EventNotification poll() {
return queueMsgTemplate
.receiveAndConvert("myQueueName", S3EventNotification.class);
}
}
Context
@Bean
public AWSCredentialsProviderChain awsCredentialsProviderChain() {
return new AWSCredentialsProviderChain(
new DefaultAWSCredentialsProviderChain());
}
@Bean
public ClientConfiguration clientConfiguration() {
return new ClientConfiguration();
}
@Bean
public AmazonSQS sqsClient(ClientConfiguration clientConfiguration,// UserData userData,
AWSCredentialsProviderChain credentialsProvider) {
AmazonSQSClient amazonSQSClient = new AmazonSQSClient(credentialsProvider, clientConfiguration);
amazonSQSClient.setEndpoint("http://localhost:9324");
return amazonSQSClient;
}
@Bean
public QueueMessagingTemplate queueMessagingTemplate(AmazonSQS sqsClient) {
return new QueueMessagingTemplate(sqsClient);
}
Any idea how to configure these? Thanks
QueueMessagingTemplate.receiveAndConvert() is based on the QueueMessageChannel.receive() method, where you can find the relevant code:
@Override
public Message<String> receive() {
return this.receive(0);
}
@Override
public Message<String> receive(long timeout) {
ReceiveMessageResult receiveMessageResult = this.amazonSqs.receiveMessage(
new ReceiveMessageRequest(this.queueUrl).
withMaxNumberOfMessages(1).
withWaitTimeSeconds(Long.valueOf(timeout).intValue()).
withAttributeNames(ATTRIBUTE_NAMES).
withMessageAttributeNames(MESSAGE_ATTRIBUTE_NAMES));
if (receiveMessageResult.getMessages().isEmpty()) {
return null;
}
com.amazonaws.services.sqs.model.Message amazonMessage = receiveMessageResult.getMessages().get(0);
Message<String> message = createMessage(amazonMessage);
this.amazonSqs.deleteMessage(new DeleteMessageRequest(this.queueUrl, amazonMessage.getReceiptHandle()));
return message;
}
So, as you can see, withMaxNumberOfMessages(1) is hardcoded to 1. That is correct, because receive() can poll only one message. The withWaitTimeSeconds(Long.valueOf(timeout).intValue()) is exactly the provided timeout. And no, you can't modify it in the case of receiveAndConvert().
Consider using QueueMessageChannel.receive(long timeout) and the messageConverter from the QueueMessagingTemplate, like it is done in:
public <T> T receiveAndConvert(QueueMessageChannel destination, Class<T> targetClass) throws MessagingException {
Message<?> message = destination.receive();
if (message != null) {
return (T) getMessageConverter().fromMessage(message, targetClass);
} else {
return null;
}
}
You can reach an appropriate QueueMessageChannel via code like this:
String physicalResourceId = this.destinationResolver.resolveDestination(destination);
new QueueMessageChannel(this.amazonSqs, physicalResourceId);
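Putting the two pieces together, a rough sketch; destinationResolver, amazonSqs, and messageConverter are assumed to be in scope, the queue name is just an example, and the 20-second wait is SQS's maximum long-poll time:

// Resolve the queue URL and open a channel against it
String physicalResourceId = destinationResolver.resolveDestination("myQueueName");
QueueMessageChannel channel = new QueueMessageChannel(amazonSqs, physicalResourceId);

// Long-poll for up to 20 seconds, then convert manually
Message<?> message = channel.receive(20);
S3EventNotification s3Event = message != null
        ? (S3EventNotification) messageConverter.fromMessage(message, S3EventNotification.class)
        : null;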
