I'm using Amazon SQS & Spring Boot (spring-cloud-aws-messaging). I've configured a message listener to receive messages from the queue with the @SqsListener annotation.
@SqsListener(value = "indexerQueue", deletionPolicy = SqsMessageDeletionPolicy.ON_SUCCESS)
public void queueListener(String rawMessage) {
...
}
This is a very simple approach, but I haven't found a way to load the queue name from a configuration file, which I need because I have different environments. Any ideas in this regard?
What version of spring-cloud-aws-messaging are you using? Version 1.1 should allow you to use a placeholder as a queue name, e.g.
@SqsListener(value = "${sqs.queue.indexer}", deletionPolicy = SqsMessageDeletionPolicy.ON_SUCCESS)
public void queueListener(String rawMessage) {
...
}
Then, in your application-<env>.properties files, you can put different values. For instance, in application-dev.properties:
sqs.queue.indexer=devIndexerQueue
and in application-production.properties:
sqs.queue.indexer=indexerQueue
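If you select the environment with Spring profiles, the matching file is picked up automatically. A minimal sketch, assuming the standard Spring Boot profile mechanism (the profile name dev is just an example):
# set in application.properties, as an environment variable, or on the command line
spring.profiles.active=dev
With that profile active, Spring Boot loads application-dev.properties on top of application.properties, so the placeholder resolves to devIndexerQueue.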
I would like to know if this is possible.
@KafkaListener(topics = @Value("${kafka.topic}"), groupId = "group1")
public void listen(ConsumerRecord<String, CloudEvent> cloudEventRecord) {
}
My IDE is flagging this way of specifying the value as wrong.
I know that ConcurrentKafkaListenerContainerFactory is used to configure the listener properly.
I am looking for a way to map the topic name from the application properties/YAML file into the listener method.
Below is my Kafka consumer:
@Service
public class Consumer {
private static final Logger LOGGER = Logger.getLogger(Consumer.class.getName());
public static Queue<ProductKafka> consumeQueue = new LinkedList<>();
@KafkaListener(topics = "#{'${spring.kafka.topics}'.split('\\\\ ')}", groupId = "#{'${spring.kafka.groupId}'}")
public void consume(ProductKafka productKafka) throws IOException {
consumeQueue.add(productKafka);
LOGGER.info(String.format("#### -> Logger Consumed message -> %s", productKafka.toString()));
System.out.printf("#### -> Consumed message -> %s", productKafka.toString());
}
}
and below is my "application.properties" file
spring.kafka.topics=Product
spring.kafka.groupId=Product-Group
My Kafka consumers are started automatically.
However, due to a requirement, I want to disable auto-start of the Kafka consumers without making any changes to the existing code, including setting autoStartup = "{xyz}" in the consumer class.
I am looking for an existing property that would disable auto-start of the Kafka consumers, something like this:
spring.kafka.consumer.enable=false
Note: I have multiple Kafka consumers, and the above property should disable all the consumers in the project.
Is there an existing property that would disable auto-start of the Kafka consumers without requiring any changes to the existing code?
There is no standard out-of-the-box property; you have to provide your own.
autoStartup="${should.start:true}"
will start the container if the property should.start is not present, and lets you disable startup by setting should.start=false.
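For context, that expression goes on the listener annotation itself, which does mean touching the listener once. A minimal sketch against the consumer from the question (the property name should.start is just an example):
@KafkaListener(topics = "#{'${spring.kafka.topics}'.split('\\\\ ')}",
        groupId = "#{'${spring.kafka.groupId}'}",
        autoStartup = "${should.start:true}")
public void consume(ProductKafka productKafka) throws IOException {
    consumeQueue.add(productKafka);
}
Setting should.start=false in application.properties then keeps this container from starting.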
EDIT
Just add something like this to your application:
@Component
class Customizer {

    Customizer(AbstractKafkaListenerContainerFactory<?, ?, ?> factory,
               @Value("${start.containers:true}") boolean start) {
        factory.setAutoStartup(start);
    }
}
start:
  containers: false
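With that single extra bean (and start.containers=false if you use application.properties instead of YAML), auto-startup is switched off for every listener container built by that factory, without modifying any of the existing consumer classes.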
I have a RabbitMQ Micronaut messaging-driven application. The application contains only the consumer side; the producer side is in another REST API application.
Now I want to perform JUnit 5 testing on the consumer side only, and I'm trying to find the best way to test a messaging-driven application that contains only the RabbitMQ listener:
@RabbitListener
public record CategoryListener(IRepository repository) {
@Queue(ConstantValues.ADD_CATEGORY)
public CategoryViewModel Create(CategoryViewModel model) {
LOG.info(String.format("Listener --> Adding the product to the product collection"));
Category category = new Category(model.name(), model.description());
return Single.fromPublisher(this.repository.getCollection(ConstantValues.PRODUCT_CATEGORY_COLLECTION_NAME, Category.class)
.insertOne(category)).map(success->{
return new CategoryViewModel(
success.getInsertedId().asObjectId().getValue().toString(),
category.getName(),
category.getDescription());
}).blockingGet();
}
}
After some research, I found that we can use Testcontainers for integration testing. In my case, the producer and the receiver are on different servers. So do I need to create a @RabbitClient for each @RabbitListener in the test environment, or is there a way to mock the client?
@MicronautTest
@Testcontainers
public class CategoryListenerTest {
@Container
private static final RabbitMQContainer RABBIT_MQ_CONTAINER = new RabbitMQContainer("rabbitmq")
.withExposedPorts(5672, 15672);
@Test
@DisplayName("Rabbit MQ container should be running")
void rabbitMqContainerShouldBeRunning() {
Assertions.assertTrue(RABBIT_MQ_CONTAINER.isRunning());
}
}
What is the best way to perform functional tests of a Micronaut messaging-driven application? In this case, the producer lives in another application, so I can't inject a producer client. How can I test this function on the listener side?
Create producers with @RabbitClient, or use the Java API directly.
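For example, a small test-only client interface can publish to the listener's queue through the Testcontainers broker. A minimal sketch, assuming Micronaut RabbitMQ's @RabbitClient/@Binding annotations and the ConstantValues class from the question (the interface name CategoryTestClient is made up for illustration):
import io.micronaut.rabbitmq.annotation.Binding;
import io.micronaut.rabbitmq.annotation.RabbitClient;

// Hypothetical test-only producer; Micronaut generates the implementation at
// compile time and publishes to the routing key the listener is bound to.
@RabbitClient
public interface CategoryTestClient {

    // RPC-style call: the return type makes the client wait for the listener's reply
    @Binding(ConstantValues.ADD_CATEGORY)
    CategoryViewModel create(CategoryViewModel model);
}
Injected into the @MicronautTest class, this client drives CategoryListener end to end without involving the real producer application.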
I'm going to use a StateRestoreListener with the Spring Cloud Kafka Streams binder.
I need to monitor the restoration progress of fault-tolerant state stores of my applications.
There is an example in the Confluent documentation: https://docs.confluent.io/current/streams/monitoring.html#streams-monitoring-runtime-status
In order to observe the restoration of all state stores you provide
your application an instance of the
org.apache.kafka.streams.processor.StateRestoreListener interface. You
set the org.apache.kafka.streams.processor.StateRestoreListener by
calling the KafkaStreams#setGlobalStateRestoreListener method.
The first problem is getting the KafkaStreams instance from the app. I solved this by using:
StreamsBuilderFactoryBean streamsBuilderFactoryBean = context.getBean("&stream-builder-process", StreamsBuilderFactoryBean.class);
KafkaStreams kafkaStreams = streamsBuilderFactoryBean.getKafkaStreams();
The second problem is setting the StateRestoreListener on KafkaStreams, because I get this error:
java.lang.IllegalStateException: Can only set
GlobalStateRestoreListener in CREATED state. Current state is: RUNNING
Is it possible to use StateRestoreListener in Spring Cloud Kafka Streams binder?
Thanks
You can do that by using a StreamsBuilderFactoryBeanCustomizer, which gives you access to the underlying KafkaStreams object. If you are using binder version 3.0 or above, this is the recommended approach. For example, you can provide the following bean in your application and set the global StateRestoreListener in it.
@Bean
public StreamsBuilderFactoryBeanCustomizer streamsBuilderFactoryBeanCustomizer() {
return factoryBean -> {
factoryBean.setKafkaStreamsCustomizer(new KafkaStreamsCustomizer() {
@Override
public void customize(KafkaStreams kafkaStreams) {
kafkaStreams.setGlobalStateRestoreListener(...);
}
});
};
}
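The listener passed to setGlobalStateRestoreListener(...) above can be as simple as one that logs restoration progress. A minimal sketch (the class name is just illustrative):
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.streams.processor.StateRestoreListener;

// Logs the start, batch progress, and end of every state store restoration.
public class LoggingStateRestoreListener implements StateRestoreListener {

    @Override
    public void onRestoreStart(TopicPartition partition, String storeName,
            long startingOffset, long endingOffset) {
        System.out.printf("Restore of %s on %s started (%d..%d)%n",
                storeName, partition, startingOffset, endingOffset);
    }

    @Override
    public void onBatchRestored(TopicPartition partition, String storeName,
            long batchEndOffset, long numRestored) {
        System.out.printf("Restored a batch of %d records for %s%n", numRestored, storeName);
    }

    @Override
    public void onRestoreEnd(TopicPartition partition, String storeName,
            long totalRestored) {
        System.out.printf("Restore of %s finished, %d records in total%n",
                storeName, totalRestored);
    }
}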
This blog has more details on this strategy.
I want to configure an exclusive consumer for ActiveMQ with Spring Boot.
Configuring it in plain Java is easy:
queue = new ActiveMQQueue("TEST.QUEUE?consumer.exclusive=true");
consumer = session.createConsumer(queue);
But with Spring Boot, the listener is configured as below.
#JmsListener(destination = "TEST.QUEUE", containerFactory = "myFactory")
public void receiveMessage(Object message) throws Exception {
......
}
Now, how do I make this an exclusive consumer? Does the below work?
#JmsListener(destination = "TEST.QUEUE?consumer.exclusive=true", containerFactory = "myFactory")
public void receiveMessage(Object message) throws Exception {
......
}
Yes, it works this way.
Just set a breakpoint in the org.apache.activemq.command.ActiveMQQueue constructor and run your application in debug mode.
You will see that Spring Boot calls
new ActiveMQQueue("TEST.QUEUE?consumer.exclusive=true"), which corresponds to the official ActiveMQ documentation:
https://activemq.apache.org/exclusive-consumer
Moreover, you can go to the ActiveMQ admin console and browse the active consumers of this queue: you will see that the exclusive flag is set to true for your consumer.