How to delete dynamically created consumer groups in a Spring Boot app - java

I have multiple instances of my Spring Boot app consuming from a Kafka topic. Since I want every instance to get data from all partitions of this topic, I assigned a different consumer group to each instance, created dynamically when the application starts.
@Configuration
@EnableKafka
public class KafkaStreamConfig {

    @Bean("provisioningStreamsBuilderFactoryBean")
    public StreamsBuilderFactoryBean myStreamsBuilderFactoryBean() {
        String myCGName = "MY-CG-" + UUID.randomUUID().toString();
        Properties streamsConfiguration = new Properties();
        streamsConfiguration.put(APPLICATION_ID_CONFIG, myCGName); // setting consumer group name
        // setting other props
        StreamsBuilderFactoryBean streamsBuilderFactoryBean = new StreamsBuilderFactoryBean();
        streamsBuilderFactoryBean.setStreamsConfiguration(streamsConfiguration);
        return streamsBuilderFactoryBean;
    }
}
So every time an instance restarts or a new instance is created, a new consumer group is created. And this is the consumer which reads from my topic.
@Component
public class MyConsumer {

    @Autowired
    private StreamsBuilder streamsBuilder;

    @PostConstruct
    public void consume() {
        final KStream<String, GenericRecord> events = streamsBuilder.stream("my-topic");
        events
            .selectKey((key, record) -> record.get("id").toString())
            .foreach((id, record) -> {
                // some computations with the events consumed
            });
    }
}
Now, because these dynamically created consumer groups stay around and are no longer used once an instance restarts, they stop consuming messages, accumulate a lot of lag, and give rise to false alerts.
So I'd like to delete these consumer groups on application shutdown with Kafka's AdminClient API. I was thinking of doing the deletion in a shutdown hook, i.e. in a method annotated with @PreDestroy inside the MyConsumer class, like this:
@PreDestroy
public void destroyMYCG() {
    try (AdminClient admin = KafkaAdminClient.create(properties)) {
        DeleteConsumerGroupsResult deleteConsumerGroupsResult = admin.deleteConsumerGroups(Collections.singletonList(provGroupName));
        KafkaFuture<Void> future = deleteConsumerGroupsResult.all();
        future.whenComplete((aVoid, throwable) -> {
            System.out.println("EXCEPTION :: " + ExceptionUtils.getStackTrace(throwable));
        });
    }
    System.out.println(getClass().getCanonicalName() + " :: DESTROYING :: " + provGroupName);
}
but when I try that I get this exception, and the consumer group still shows up in the list of consumer groups:
org.apache.kafka.common.errors.TimeoutException: The AdminClient thread is not accepting new calls.
Can someone please help me with this?
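Two things in the snippet above are worth checking, though neither is guaranteed to be the whole story: a consumer group cannot be deleted while it still has active members (the streams threads may still be running when @PreDestroy fires), and the whenComplete callback never blocks, so the try-with-resources may begin closing the AdminClient before the result is observed (it also calls getStackTrace on a throwable that is null on success). A minimal sketch of a blocking variant, assuming the streams have already been closed by that point:

@PreDestroy
public void destroyMYCG() {
    // Sketch only: wait for the deletion result instead of relying on an async callback.
    try (AdminClient admin = KafkaAdminClient.create(properties)) {
        admin.deleteConsumerGroups(Collections.singletonList(provGroupName))
             .all()
             .get(10, TimeUnit.SECONDS); // throws if deletion failed (e.g. the group still has members)
        System.out.println(getClass().getCanonicalName() + " :: DELETED :: " + provGroupName);
    } catch (Exception e) {
        System.out.println("EXCEPTION :: " + ExceptionUtils.getStackTrace(e));
    }
}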

Using a random UUID as the consumer group name is a bad idea. You can define a fixed string as the consumer group name for each Spring Boot app.

IMHO it is a logical mistake to create a consumer group with a UUID. Logically, if the same process restarts, it is the same app and therefore the same consumer. You will solve your problem by giving consumer groups good names related to what the app logically does.
I would delete consumer groups on the server side, with a "GC" configured at a certain level of lag.
Again, a consumer group is not an application id. It is not intended to be randomly created.
And honestly speaking, I am not sure what kind of problem you solve by doing this.
Because in fact, by saying that the consumer group is random, you are saying "my code does random things and I have no clue what happens in message processing".
We have very complex Kafka message processing, and there is always a better or worse name for the process, but at least one exists which is not random.
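As a sketch of what these answers suggest, the group name can be derived from stable configuration instead of a UUID (here env is an injected Spring Environment and instance.role is a hypothetical property identifying this deployment):

// Hypothetical: a stable id per logical application/instance role, injected
// from configuration, so a restart rejoins the same consumer group.
String myCGName = "MY-CG-" + env.getProperty("instance.role", "default");
streamsConfiguration.put(APPLICATION_ID_CONFIG, myCGName);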

Related

Two instances consuming from topic

I have deployed two instances of an application. Both applications run the same code and consume from the same topic.
@KafkaListener(offsetReset = OffsetReset.EARLIEST, offsetStrategy = OffsetStrategy.DISABLED)
public class AppConsumer implements ConsumerRebalanceListener, KafkaConsumerAware {

    @Topic("topic")
    public void consumeAppInfo(@KafkaKey String name, @Body @Nullable String someString) {
        ...
    }
}
I have a problem where only one of the applications consumes the message. The topic has only one partition, partition 0, which I believe is the default.
I have tried to add group-id and threads to the KafkaListener. This seems to work sometimes and other times not.
@KafkaListener(groupId = "myGroup", threads = 10)
What is the simplest solution to getting both applications to consume the same message?
You could skip the group and just give each application a separate consumer id; each consumer then consumes all messages (unless they are also assigned to a group).
Groups are used for parallel processing of messages; each consumer in a group gets assigned to a partition for processing messages.
More info => difference between groupid and consumerid in Kafka consumer
In Kafka, only one consumer within a consumer group is assigned to each partition. If you want different applications to consume data from the same partition, you need to specify a different consumer group for each application/instance.
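A minimal sketch of that idea with the plain consumer API (broker address, topic and group names are placeholders); the only essential part is that each application uses a different group.id:

Properties props = new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
// Give each application its own group so each one receives every message;
// the second application would use e.g. "app-instance-2".
props.put(ConsumerConfig.GROUP_ID_CONFIG, "app-instance-1");
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
    consumer.subscribe(Collections.singletonList("topic"));
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
    records.forEach(r -> System.out.println(r.key() + " -> " + r.value()));
}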

Spring @KafkaListener and concurrency

I am working with Spring Boot + Spring @KafkaListener. The behavior I expect is: my Kafka listener reads messages in 10 threads, so that if one of the threads hangs, the others would continue reading and handling messages.
I defined a bean:
@Bean
public ConcurrentKafkaListenerContainerFactory<?, ?> kafkaListenerContainerFactory(
        ConcurrentKafkaListenerContainerFactoryConfigurer configurer,
        ConsumerFactory<Object, Object> kafkaConsumerFactory) {
    ConcurrentKafkaListenerContainerFactory<Object, Object> factory = new ConcurrentKafkaListenerContainerFactory<>();
    configurer.configure(factory, kafkaConsumerFactory);
    factory.getContainerProperties().setMissingTopicsFatal(false);
    factory.getContainerProperties().setCommitLogLevel(LogIfLevelEnabled.Level.INFO);
    return factory;
}
And spring boot config:
spring.kafka.listener.concurrency=10
I see that all configs work; I can see my 10 threads in JMX.
But then I run the following test:
@KafkaListener(topics = { "${topic.name}" }, clientIdPrefix = "${kafka.client.id.prefix}",
        idIsGroup = false, id = "${kafka.listener.name}", containerFactory = "kafkaListenerContainerFactory")
public void listen(ConsumerRecord<String, String> record) {
    // the version is carried in the record value (simplified for this test)
    if (Integer.parseInt(record.value()) < 3) {
        try {
            Thread.sleep(20000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    } else {
        System.out.println("It works!");
    }
}
If the version is < 3, the listener hangs; otherwise it completes.
I send 3 messages with versions 1, 2 and 3. I expect the messages with versions 1 and 2 to hang, but the message with version 3 to be processed as soon as it reaches the listener. Unfortunately, the message with version 3 waits for messages 1 and 2 before it starts processing.
Maybe my expectations are wrong and this is the correct behavior of a Kafka listener.
Please help me make sense of Kafka concurrency: why does it act like that?
Kafka doesn't work that way; you need at least as many partitions as consumers (controlled by concurrency in the spring container).
Also, only one consumer (in a group) can consume from a partition at a time so, even if you increase the partitions, records in the same partition behind the "stuck" consumer will not be received by other consumers.
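As a sketch of the prerequisite (names and counts are illustrative): the topic needs at least as many partitions as the container's concurrency, which can be arranged with the admin API:

try (AdminClient admin = AdminClient.create(
        Map.of(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"))) {
    // 10 partitions so that concurrency=10 can assign one partition per consumer thread
    // (call from a method that declares throws Exception)
    admin.createTopics(List.of(new NewTopic("my-topic", 10, (short) 1)))
         .all()
         .get();
}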
If you want failover in Kafka, you must spin up more instances of your application.
Example: you have a topic named test with 1 partition; you create 2 instances of your app with the same Kafka group. One instance will process your data, the other will wait and start processing messages in case the first instance crashes. The same holds if you have N partitions with N + 1, 2 or 3 instances of your application. Also, every instance will only have one consumer thread.
For more info, search for: Kafka Consumer Groups.

(Cleanly?) Consuming Messages in Spring Apache Kafka

I've put together a sample application to test out Spring Apache Kafka. I am having success sending messages through the KafkaTemplate and am even able to output the messages sent from the template using ConsumerRecord, so I know the consumer is receiving the data, but I'm trying to figure out one thing.
How can I consume the messages without having to throw some extra value in there when creating my consumerRecord bean?
I'm wondering if I'm missing something and shouldn't have to create this bean at all; in the guide I used, I don't understand where the ConsumerRecord in the listen method comes from, or where that method even gets called.
In my config class I have created a consumerRecord bean:
@Bean
public ConsumerRecord<String, String> consumerRecord() {
    return new ConsumerRecord<>("my-topic", 1, 1L, "key", "value");
}
with essentially dummy properties, aside from the topic, which I specify as my-topic.
I am sending messages through the template to be consumed like so:
void sendMsg() {
    ProducerRecord<String, String> producerRecord = new ProducerRecord<>("my-topic", "Sample Data");
    try {
        latch.await(1000L, TimeUnit.SECONDS);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
    template.send(producerRecord);
    listen(consumerRecord);
}
And I'm printing out all the values being sent to the topic to the logger with this method:
private static final String topicName = "my-topic";

@KafkaListener(topics = topicName)
private void listen(ConsumerRecord consumerRecord) {
    logger.info("Consumer Record Value:::: " + consumerRecord.value());
    latch.countDown();
}
My concern is that I don't know what's happening with that "value" I put at the end of the consumer record bean. In a real-world application I wouldn't want some random dummy value being consumed, so how can I avoid this and just focus on the data being sent through the KafkaTemplate?
I used the Spring Docs as my point of reference when putting together the application (linked above earlier when referencing the guide).
*edit: If anyone knows?
Can you pass the value straight from the producer? But surely it wouldn’t even go into Kafka then?
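For reference: a method annotated with @KafkaListener is invoked by Spring's listener container with records polled from the broker, so no ConsumerRecord bean (and no manual listen(...) call) should be needed at all. A minimal sketch with hypothetical topic and group names:

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class MyTopicListener {

    private static final Logger logger = LoggerFactory.getLogger(MyTopicListener.class);

    // Invoked by the container for each record polled from "my-topic";
    // the record is built from real broker data, never from a bean.
    @KafkaListener(topics = "my-topic", groupId = "my-group")
    public void listen(ConsumerRecord<String, String> consumerRecord) {
        logger.info("Consumer Record Value:::: {}", consumerRecord.value());
    }
}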

Spring Kafka - Subscribe new topics during runtime

I'm using the annotation @KafkaListener to consume topics in my application. My issue is that if I create a new topic in Kafka while my consumer is already running, the consumer will not pick up the new topic, even if it matches the topicPattern I'm using. Is there a way to "refresh" the subscribed topics periodically, so that new topics are picked up and rebalanced across my running consumers?
I'm using Spring Kafka 1.2.2 with Kafka 0.10.2.0.
Regards
You can't dynamically add topics at runtime; you have to stop/start the container to start listening to new topics.
You can #Autowire the KafkaListenerEndpointRegistry and stop/start listeners by id.
You can also stop/start all listeners by calling stop()/start() on the registry itself.
Actually it is possible.
It worked for me with Kafka 1.1.1.
Under the hood, Spring uses consumer.subscribe(topicPattern), so it depends entirely on the Kafka client library whether the message will be seen by the consumer.
There is a consumer config property called metadata.max.age.ms, which is 5 minutes by default. It basically controls how often the client goes to the broker for metadata updates, meaning new topics will not be seen by the consumer for up to 5 minutes. You can decrease this value (e.g. to 20 seconds) and you should see the KafkaListener start picking up messages from new topics more quickly.
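As a sketch, lowering that property in the consumer configuration might look like this (the broker address is a placeholder):

Map<String, Object> consumerProps = new HashMap<>();
consumerProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
// Refresh cluster metadata every 20s (default 300000 = 5 minutes) so topics
// newly matching the subscription pattern are discovered sooner.
consumerProps.put(ConsumerConfig.METADATA_MAX_AGE_CONFIG, "20000");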
The following way works well for me.
ContainerProperties containerProps = new ContainerProperties("topic1", "topic2");
containerProps.setMessageListener(new MessageListener<Integer, String>() {
    @Override
    public void onMessage(ConsumerRecord<Integer, String> message) {
        logger.info("received: " + message);
    }
});
KafkaMessageListenerContainer<Integer, String> container = createContainer(containerProps);
container.setBeanName("testAuto");
container.start();
ref: http://docs.spring.io/spring-kafka/docs/1.0.0.RC1/reference/htmlsingle/
In a practical application, I use a ConcurrentMessageListenerContainer instead of the single-threaded KafkaMessageListenerContainer.

How can I handle multiple messages concurrently from a JMS topic (not queue) with java and spring 3.0?

Note that I'd like multiple message listeners to handle successive messages from the topic concurrently. In addition I'd like each message listener to operate transactionally so that a processing failure in a given message listener would result in that listener's message remaining on the topic.
The spring DefaultMessageListenerContainer seems to support concurrency for JMS queues only.
Do I need to instantiate multiple DefaultMessageListenerContainers?
If time flows down the vertical axis:
ListenerA reads msg 1 ListenerB reads msg 2 ListenerC reads msg 3
ListenerA reads msg 4 ListenerB reads msg 5 ListenerC reads msg 6
ListenerA reads msg 7 ListenerB reads msg 8 ListenerC reads msg 9
ListenerA reads msg 10 ListenerB reads msg 11 ListenerC reads msg 12
...
UPDATE:
Thanks for your feedback @T.Rob and @skaffman.
What I ended up doing is creating multiple DefaultMessageListenerContainers with concurrency=1 and then putting logic in the message listener so that only one thread would process a given message id.
You don't want multiple DefaultMessageListenerContainer instances, no, but you do need to configure the DefaultMessageListenerContainer to be concurrent, using the concurrentConsumers property:
Specify the number of concurrent consumers to create. Default is 1. Specifying a higher value for this setting will increase the standard level of scheduled concurrent consumers at runtime: This is effectively the minimum number of concurrent consumers which will be scheduled at any given time. This is a static setting; for dynamic scaling, consider specifying the "maxConcurrentConsumers" setting instead.
Raising the number of concurrent consumers is recommendable in order to scale the consumption of messages coming in from a queue. However, note that any ordering guarantees are lost once multiple consumers are registered. In general, stick with 1 consumer for low-volume queues.
However, there's a big warning at the bottom:
Do not raise the number of concurrent consumers for a topic. This would lead to concurrent consumption of the same message, which is hardly ever desirable.
This is interesting, and makes sense when you think about it. The same would occur if you had multiple DefaultMessageListenerContainer instances.
I think perhaps you need to rethink your design, although I'm not sure what I'd suggest. Concurrent consumption of pub/sub messages seems like a perfectly reasonable thing to do, but how to avoid getting the same message delivered to all of your consumers at the same time?
At least in ActiveMQ, what you want is fully supported; it is called a Virtual Topic.
The concept is:
You create a Virtual Topic (simply create a Topic whose name uses the prefix VirtualTopic.), e.g. VirtualTopic.Color.
You create a consumer subscribing to this Virtual Topic using the pattern Consumer.<clientName>.VirtualTopic.<topicName>, e.g. Consumer.client1.VirtualTopic.Color. ActiveMQ will then create a queue with that name, and that queue will subscribe to VirtualTopic.Color, so every message published to this Virtual Topic will be delivered to the client1 queue; note that it works like RabbitMQ exchanges.
You are done: now you can consume the client1 queue like any queue, with many consumers, a DLQ, a customized redelivery policy, etc.
At this point you have probably understood that you can create client2, client3, and as many subscribers as you want; all of them will receive a copy of the message published to VirtualTopic.Color.
Here is the code:
@Component
public class ColorReceiver {

    private static final Logger LOGGER = LoggerFactory.getLogger(ColorReceiver.class);

    @Autowired
    private JmsTemplate jmsTemplate;

    // simply generating data to the topic
    long id = 0;

    @Scheduled(fixedDelay = 500)
    public void postMail() throws JMSException, IOException {
        final Color colorName = new Color[] { Color.BLUE, Color.RED, Color.WHITE }[new Random().nextInt(3)];
        final Color color = new Color(++id, colorName.getName());
        final ActiveMQObjectMessage message = new ActiveMQObjectMessage();
        message.setObject(color);
        message.setProperty("color", color.getName());
        LOGGER.info("status=color-post, color={}", color);
        jmsTemplate.convertAndSend(new ActiveMQTopic("VirtualTopic.color"), message);
    }

    /**
     * Listens to all color messages
     */
    @JmsListener(
        destination = "Consumer.client1.VirtualTopic.color", containerFactory = "colorContainer",
        selector = "color <> 'RED'"
    )
    public void genericReceiveMessage(Color color) throws InterruptedException {
        LOGGER.info("status=GEN-color-receiver, color={}", color);
    }

    /**
     * Listens only to red color messages
     *
     * The destination clientId does not have to exist beforehand (its name can be a fancy name);
     * the unique requirement is that the containers' clientIds are different from each other.
     */
    @JmsListener(
        // destination = "Consumer.redColorContainer.VirtualTopic.color",
        destination = "Consumer.client1.VirtualTopic.color",
        containerFactory = "redColorContainer", selector = "color='RED'"
    )
    public void receiveMessage(ObjectMessage message) throws InterruptedException, JMSException {
        LOGGER.info("status=RED-color-receiver, color={}", message.getObject());
    }

    /**
     * Listens to all color messages
     */
    @JmsListener(
        destination = "Consumer.client2.VirtualTopic.color", containerFactory = "colorContainer"
    )
    public void genericReceiveMessage2(Color color) throws InterruptedException {
        LOGGER.info("status=GEN-color-receiver-2, color={}", color);
    }
}
@SpringBootApplication
@EnableJms
@EnableScheduling
@Configuration
public class Config {

    /**
     * Each @JmsListener declaration needs a different containerFactory because ActiveMQ requires
     * different clientIds per consumer pool (as with the two @JmsListeners above, or two
     * application instances).
     */
    @Bean
    public JmsListenerContainerFactory<?> colorContainer(ActiveMQConnectionFactory connectionFactory,
            DefaultJmsListenerContainerFactoryConfigurer configurer) {
        final DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
        factory.setConnectionFactory(connectionFactory);
        factory.setConcurrency("1-5");
        configurer.configure(factory, connectionFactory);
        // container.setClientId("aId..."); let Spring generate a random ID
        return factory;
    }

    @Bean
    public JmsListenerContainerFactory<?> redColorContainer(ActiveMQConnectionFactory connectionFactory,
            DefaultJmsListenerContainerFactoryConfigurer configurer) {
        // necessary when posting serializable objects (you can also set this in application.properties)
        connectionFactory.setTrustedPackages(Arrays.asList(Color.class.getPackage().getName()));
        final DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
        factory.setConnectionFactory(connectionFactory);
        factory.setConcurrency("1-2");
        configurer.configure(factory, connectionFactory);
        return factory;
    }
}
public class Color implements Serializable {

    public static final Color WHITE = new Color("WHITE");
    public static final Color BLUE = new Color("BLUE");
    public static final Color RED = new Color("RED");

    private String name;
    private long id;

    // CONSTRUCTORS, GETTERS AND SETTERS
}
Multiple consumers are allowed on the same topic subscription in JMS 2.0; this was not the case with JMS 1.1. Please refer to:
https://www.oracle.com/technetwork/articles/java/jms2messaging-1954190.html
This is one of those occasions where the differences in transport providers bubble up through the abstraction of JMS. JMS wants to provide a copy of the message for each subscriber on a topic. But the behavior that you want is really that of a queue. I suspect that there are other requirements driving this to a pub/sub solution which were not described - for example other things need to subscribe to the same topic independent of your app.
If I were to do this in WebSphere MQ the solution would be to create an administrative subscription which would result in a single copy of each message on the given topic to be placed onto a queue. Then your multiple subscribers could compete for messages on that queue. This way your app could have multiple threads among which the messages are distributed, and at the same time other subscribers independent of this application could dynamically (un)subscribe to the same topic.
Unfortunately, there's no generic JMS-portable way of doing this. You are dependent on the transport provider's implementation to a great degree. The only one of these I can speak to is WebSphere MQ but I'm sure other transports support this in one way or another and to varying degrees if you are creative.
Here's a possibility:
1) Create only one DMLC configured with the bean and method to handle the incoming message. Set its concurrency to 1.
2) Configure a task executor with its number of threads equal to the concurrency you desire. Create an object pool for the objects which actually process a message. Give a reference to the task executor and the object pool to the bean you configured in #1. The object pool is useful if the actual message-processing bean is not thread-safe.
3) For an incoming message, the bean in the DMLC creates a custom Runnable, points it at the message and the object pool, and hands it to the task executor.
4) The run method of the Runnable gets a bean from the object pool and calls its 'process' method with the given message; see the sketch after this list.
#4 can be managed with a proxy and the object pool to make it easier.
I haven't tried this solution yet, but it seems to fit the bill. Note that this solution is not as robust as an EJB MDB: Spring, for example, will not discard an object from the pool if it throws a RuntimeException.
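A rough sketch of steps 3 and 4 under those assumptions (Processor is a hypothetical message-processing type; the pool is commons-pool2):

import javax.jms.Message;
import javax.jms.MessageListener;
import org.apache.commons.pool2.ObjectPool;
import org.springframework.core.task.TaskExecutor;

public class DispatchingListener implements MessageListener {

    private final TaskExecutor taskExecutor;   // sized to the desired concurrency
    private final ObjectPool<Processor> pool;  // pools possibly non-thread-safe processors

    public DispatchingListener(TaskExecutor taskExecutor, ObjectPool<Processor> pool) {
        this.taskExecutor = taskExecutor;
        this.pool = pool;
    }

    @Override
    public void onMessage(Message message) {
        // Step 3: hand the message off to the executor so the DMLC thread is freed.
        taskExecutor.execute(() -> {
            Processor processor = null;
            try {
                // Step 4: borrow a processor from the pool and let it do the work.
                processor = pool.borrowObject();
                processor.process(message);
            } catch (Exception e) {
                // Note: the JMS transaction boundary has already passed on the
                // container thread, so a failure here will not trigger redelivery.
                e.printStackTrace();
            } finally {
                if (processor != null) {
                    try {
                        pool.returnObject(processor);
                    } catch (Exception ignored) {
                    }
                }
            }
        });
    }

    /** Hypothetical processing interface. */
    public interface Processor {
        void process(Message message) throws Exception;
    }
}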
Creating a custom task executor seemingly solved the issue for me, w/o duplicate processing:
@Configuration
class BeanConfig {

    @Bean(destroyMethod = "shutdown")
    public ThreadPoolTaskExecutor topicExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setAllowCoreThreadTimeOut(true);
        executor.setKeepAliveSeconds(300);
        executor.setCorePoolSize(4);
        executor.setQueueCapacity(0);
        executor.setThreadNamePrefix("TOPIC-");
        return executor;
    }

    @Bean
    JmsListenerContainerFactory<?> topicListenerFactory(ConnectionFactory connectionFactory,
            DefaultJmsListenerContainerFactoryConfigurer configurer,
            @Qualifier("topicExecutor") Executor topicExecutor) {
        DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
        factory.setPubSubDomain(true);
        configurer.configure(factory, connectionFactory);
        factory.setPubSubDomain(true); // set again: the configurer may overwrite it from application properties
        factory.setSessionTransacted(false);
        factory.setSubscriptionDurable(false);
        factory.setTaskExecutor(topicExecutor);
        return factory;
    }
}
class MyBean {

    @JmsListener(destination = "MYTOPIC", containerFactory = "topicListenerFactory", concurrency = "1")
    public void receiveTopicMessage(SomeTopicMessage message) {}
}
I've run into the same problem. I'm currently investigating RabbitMQ, which seems to offer a perfect solution in a design pattern they call "work queues." More info here: http://www.rabbitmq.com/tutorials/tutorial-two-java.html
If you're not totally tied to JMS you might look into this. There might also be a JMS to AMQP bridge, but that might start to look hacky.
I'm having some fun (read: difficulties) getting RabbitMQ installed and running on my Mac but think I'm close to having it working, I will post back if I'm able to solve this.
On the server.xml configs: in maxSessions you can specify the number of sessions you want.
Came across this question. My configuration is: create a bean with id="DefaultListenerContainer", add property name="concurrentConsumers" value="10" and property name="maxConcurrentConsumers" value="50".
Works fine, so far. I printed the thread id and verified that multiple threads do get created and also reused.
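For comparison, the Java-config equivalent of that bean definition might look like this (a sketch; the connection factory, destination name and listener are assumptions):

@Bean
public DefaultMessageListenerContainer defaultListenerContainer(
        ConnectionFactory connectionFactory, MessageListener messageListener) {
    DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
    container.setConnectionFactory(connectionFactory);
    container.setDestinationName("SOME.QUEUE");
    container.setMessageListener(messageListener);
    container.setConcurrentConsumers(10);    // minimum scheduled consumers
    container.setMaxConcurrentConsumers(50); // upper bound for dynamic scaling
    return container;
}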
