KafkaClient doesn't consume messages if one consumer in the group is unauthorized - java

I faced an issue when one consumer in the consumer group is unauthorized. As a result, all consumers in the same consumer group stop consuming messages from ALL topics. I will describe an isolated example, but note that it is a synthetic example; my real services don't read only one topic per instance.
I have two services: serviceA and serviceB. Both services have the same consumer group id, but serviceA consumes messages only from topicA and serviceB only from topicB. Everything is fine when neither topic has ACL rules. But when I add an ACL for topicA (only on the Kafka side; the app knows nothing about the ACL), serviceA stops consuming messages (which is expected), but at the same time serviceB also stops consuming messages from topicB, with the following logs:
2023-02-17T11:01:38.134+03:00 WARN 69262 --- [ Thread-1] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-my_app_2-1, groupId=my_app_2] Error while fetching metadata with correlation id 399 : {first_topic=TOPIC_AUTHORIZATION_FAILED}
2023-02-17T11:01:38.135+03:00 ERROR 69262 --- [ Thread-1] org.apache.kafka.clients.Metadata : [Consumer clientId=consumer-my_app_2-1, groupId=my_app_2] Topic authorization failed for topics [first_topic]
As I understand it, this happens because the kafka-client in serviceB fetches metadata (by groupId?), sees that topicA (remember, serviceB consumes only topicB and knows nothing about topicA) is unauthorized, and doesn't read messages from Kafka.
My questions are:
Do I understand correctly that there is no way to make consumers in one consumer group ignore the authorization error on one topic?
If that is impossible, is there some way to detect that one of the consumer's topics is unauthorized, other than catching the TopicAuthorizationException?
I created a project to reproduce the behavior: https://github.com/makcpopTwo/kafkaExample . I tried changing the consumer configuration but did not find a solution.
I also tried the spring-kafka consumer: it behaves the same way, except that Spring closes the container when the authorization exception is received.
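For context, a minimal sketch (plain KafkaConsumer API; the broker address and topic name are assumptions) of where the authorization failure surfaces on the client side. It only illustrates that the exception carries the offending topic names, it is not a workaround for the group-wide behaviour:

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.errors.TopicAuthorizationException;
import org.apache.kafka.common.serialization.StringDeserializer;

Properties props = new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");   // assumed broker address
props.put(ConsumerConfig.GROUP_ID_CONFIG, "my_app_2");                  // same group id as in the logs
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
    consumer.subscribe(List.of("second_topic"));   // hypothetical topic name for serviceB
    while (true) {
        try {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
            records.forEach(r -> System.out.println(r.value()));
        } catch (TopicAuthorizationException e) {
            // the exception reports which topics failed authorization
            System.err.println("Unauthorized topics: " + e.unauthorizedTopics());
        }
    }
}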

Related

Camel consumer route with input from another consumer route

I have a Kafka consumer route from which I get some data.
from("Kafka:foo?brokers=localhost:9092")
Once I receive data from the consumer, I want to use that data in the topic name for a Paho MQTT consumer.
from("paho:#?brokerUrl=tcp://localhost:1883")
I'm not able to figure out how to set the dynamic header CamelMqttTopic from the first consumer, as the two seem to be independent flows. I'm using Camel with the Spring framework. Excuse me if my basic Camel understanding is flawed.
You can override the MQTT topic using the CamelPahoOverrideTopic message header with a value being the Kafka topic accessed through the kafka.TOPIC message header:
from("kafka:foo?brokers=localhost:9092")
.setHeader(PahoConstants.CAMEL_PAHO_OVERRIDE_TOPIC, simple("${headers[kafka.TOPIC]}"))
.to("paho:#?brokerUrl=tcp://localhost:1883");

Kafka consumer in test only works with "auto.offset.reset"="earliest"

I'm struggling to understand my Kafka consumer behaviours in some integration tests.
I have a Spring Boot service which uses a default, autowired KafkaTemplate<String, String> to produce messages to a topic. In my integration tests, I create a KafkaConsumer in each test:
KafkaConsumer<String, String> consumer = new KafkaConsumer<>(
        Map.of(
                ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, KAFKA_CONTAINER.getBootstrapServers(),
                ConsumerConfig.GROUP_ID_CONFIG, "test-consumer-group-" + UUID.randomUUID(),
                ConsumerConfig.GROUP_INSTANCE_ID_CONFIG, UUID.randomUUID().toString() ),
        new StringDeserializer(), new StringDeserializer() );
consumer.subscribe( topics );
return consumer;
with the intent of having a test flow that looks something like:
Create a new consumer for the topics we're testing
Perform action under test which sends messages to some topics
Poll the topics of interest and verify the messages are there
Close consumer
My expectation was that since the default behaviour of a new consumer is to have auto.offset.reset set to latest, I would only get messages sent after I create the consumer, which is what I want in this case. However, my consumer never receives any messages! I have to set the consumer to earliest - but this is problematic since I don't want messages created by other tests interfering.
The messages don't have any kind of unique identifier on them, which makes consuming the entire topic each time a tricky proposition in terms of test verifications.
I've tried various permutations of auto committing, polling before running the test but after subscribing, and manual syncs, but nothing seems to work - how can I manage my test lifecycle as described above (or is it not possible)?
The kafka instance is managed using TestContainers in case that's relevant.
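For what it's worth, a sketch of that lifecycle with the plain client, assuming a hypothetical createConsumer(...) wrapper around the snippet above and a hypothetical performActionUnderTest(). The key detail is that subscribe() is lazy: the group join and partition assignment only happen inside poll(), so polling until the assignment is non-empty before producing the test messages is one way to make "latest" behave as described:

import java.time.Duration;
import java.util.List;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

try (KafkaConsumer<String, String> consumer = createConsumer(List.of("topic-under-test"))) {
    // poll until partitions are assigned, so "latest" is resolved
    // before the test messages are produced
    while (consumer.assignment().isEmpty()) {
        consumer.poll(Duration.ofMillis(100));
    }

    performActionUnderTest();   // hypothetical: triggers the service to produce messages

    ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
    // ... assertions on the records ...
}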

Should I create NewTopics in each service (Spring Kafka)?

I'm using Kafka for sending messages between services. I use a NewTopic bean for configuring the number of partitions, for example:
@Bean
fun kafkaTopic(kafkaProperties: KafkaProperties): NewTopic = NewTopic(
    kafkaProperties.topics.schedulerCalculationTopic.name,
    kafkaProperties.topics.schedulerCalculationTopic.partitions,
    1
)
My question is simple: should I add this bean to both the consumer service and the producer service, or only to one of them?
I would put it in the producer service and then consider the producer the 'owner' of those topics.
But it gets a bit more complicated if you have a scenario where several producers write to the same topic(s).
If you are not creating the topic on the fly, the best practice is to create the topic before reading from or writing to it.
The rationale is to prevent brokers from creating the topic whenever they receive a metadata fetch request or a consume request with that topic name. Otherwise, if the consumer starts before the producer, you might end up with the wrong number of partitions (the broker will create the topic with its default number-of-partitions setting).
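As an illustration, a minimal sketch of declaring the topic up front in the owning (producer) service with spring-kafka's TopicBuilder; the topic name and partition count here are made up:

import org.apache.kafka.clients.admin.NewTopic;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.TopicBuilder;

@Configuration
public class TopicConfig {

    // Spring Boot's auto-configured KafkaAdmin picks up NewTopic beans and creates
    // any missing topics on startup; it never shrinks existing partitions
    @Bean
    public NewTopic schedulerCalculationTopic() {
        return TopicBuilder.name("scheduler-calculation")   // hypothetical topic name
                .partitions(6)                               // hypothetical partition count
                .replicas(1)
                .build();
    }
}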

Difference between KafkaTemplate and KafkaProducer send method?

My question is: in a Spring Boot microservice using Kafka, which is appropriate to use, KafkaTemplate.send() or KafkaProducer.send()?
I have used KafkaConsumer and not KafkaListener to poll the records, because KafkaListener fetches records as and when they arrive on the topics, whereas I wanted the records to be polled periodically based on business needs.
Have gone through the documentation of KafkaProducer https://kafka.apache.org/10/javadoc/org/apache/kafka/clients/producer/KafkaProducer.html
and Spring KafkaTemplate
https://docs.spring.io/spring-kafka/reference/html/#kafka-template
I am unable to decide which is ideal to use, or at least the reason for using one over the other is unclear to me.
What I need is for the operation to be synchronous, i.e. I want to know whether the publish succeeded or not, because if the record is not delivered I need to retry publishing.
Any help will be appreciated.
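For the synchronous requirement, a minimal sketch with KafkaTemplate (the topic name and timeout are assumptions; depending on the spring-kafka version, send() returns a ListenableFuture or a CompletableFuture, but calling get() blocks either way):

import java.util.concurrent.TimeUnit;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.SendResult;

public void publishSync(KafkaTemplate<String, String> kafkaTemplate, String payload) throws Exception {
    // send() is asynchronous; blocking on the returned future makes the call effectively synchronous
    SendResult<String, String> result = kafkaTemplate.send("orders-topic", payload)  // hypothetical topic name
            .get(10, TimeUnit.SECONDS);  // throws if the broker does not acknowledge in time
    System.out.println("Published to partition " + result.getRecordMetadata().partition()
            + " at offset " + result.getRecordMetadata().offset());
}

If get() throws, the record was not acknowledged and the publish can be retried.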
For your first question, which one should you use, KafkaTemplate or KafkaProducer?
The KafkaProducer is defined in Apache Kafka. The KafkaTemplate is Spring's implementation of it (although it does not implement Producer directly) and so it provides more methods for you to use.
Read this link:
What is the difference between Kafka Template and kafka producer?
For the retry mechanism in case of failure while publishing, I have answered this in another question.
The acks parameter controls how many partition replicas must receive the record before the producer can consider the write successful. There are 3 values for the acks parameter:
acks=0: the producer will not wait for a reply from the broker before assuming the message was sent successfully.
acks=1: the producer will receive a successful response from the broker the moment the leader replica receives the message. If the message can't be written to the leader, the producer will receive an error response and can retry.
acks=all: the producer will receive a successful response from the broker once all in-sync replicas have received the message.
Best way to configure retries in Kafka Producer
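For reference, a sketch of the producer configuration those settings refer to, using the plain KafkaProducer client (the broker address and retry values are assumptions; acks and retries are also exposed through spring.kafka.producer.* properties in Spring Boot):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

Properties props = new Properties();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // assumed broker address
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
props.put(ProducerConfig.ACKS_CONFIG, "all");                 // wait for all in-sync replicas
props.put(ProducerConfig.RETRIES_CONFIG, 3);                  // retry transient send failures
props.put(ProducerConfig.DELIVERY_TIMEOUT_MS_CONFIG, 30000);  // upper bound on send + retries

KafkaProducer<String, String> producer = new KafkaProducer<>(props);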

Can producers have the same clientId and publish to a topic in Artemis?

I was wondering if it's possible to have multiple producers using the same clientId to send messages to a durable topic. And on the consuming side, what would happen if the clientID is the same as the producer side but the subscription name is different?
E.g. the producer has a clientId of 123abc and sends messages to a durable topic. A consumer is subscribed to this durable topic, and this consumer has a clientId of 123abc but also a subscriptionName of abc123. Would the consumer still be able to pick up the message? What would happen if I bring another consumer into the mix?
Section 6.1.2 of the JMS 2 specification states:
By definition, the client state identified by a client identifier can be ‘in use’ by only one client at a time.
By "client" the specification really means "connection." Therefore, the same client identifier can only be in use by one connection at a time. So if you create multiple producers from the same connection that's OK. However, creating multiple connections with the same client ID will fail before you even get to the point where you can create a producer as the broker will validate the client ID when the connection is created.
That said, there's no real point in setting the client ID on a connection that's just used for producing messages. Section 6.1.2 of the JMS 2 specification also states:
The only use of a client identifier defined by JMS is its mandatory use in identifying an unshared durable subscription or its optional use in identifying a shared durable or non-durable subscription.
Therefore, it's not really necessary to set the client ID unless you're creating an unshared durable subscription or possibly a shared durable or non-durable subscription.
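To make the distinction concrete, a minimal sketch with the JMS 2 API against Artemis (the broker URL, topic, and names are assumptions; on newer client versions the imports are jakarta.jms instead of javax.jms):

import javax.jms.ConnectionFactory;
import javax.jms.JMSConsumer;
import javax.jms.JMSContext;
import javax.jms.Topic;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");  // assumed broker URL

// Producer side: no client ID is needed just to publish
try (JMSContext producerCtx = cf.createContext()) {
    Topic topic = producerCtx.createTopic("my.durable.topic");  // hypothetical topic name
    producerCtx.createProducer().send(topic, "hello");
}

// Consumer side: the client ID + subscription name pair identifies the unshared durable
// subscription, and only one connection may use that client ID at a time
try (JMSContext consumerCtx = cf.createContext()) {
    consumerCtx.setClientID("123abc");
    Topic topic = consumerCtx.createTopic("my.durable.topic");
    JMSConsumer consumer = consumerCtx.createDurableConsumer(topic, "abc123");
    String body = consumer.receiveBody(String.class, 5000);
}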
Two subscribers cannot have the same clientId: when they both try to connect to the broker, the second one will get an exception. However, you can override the clientId: using TomEE or Tomcat you can add a simple line to the system.properties file like this:
<classname>.activation.clientId=<newclientid>
No problem for producers.
