My exchange and DLQ are not being created; I have the following in my YML below. I do get an anonymous queue created, but no messages are posted to it either. Any thoughts?
rabbit:
  bindings:
    documentrequest-policyinqadapter:
      producer:
        bindingRoutingKey: documentrequest.adapter.*.*.*.policyinq.req
        routing-key-expression: headers['events-type']
      consumer:
        autoBindDlq: true
        republishToDlq: true
        requeueRejected: false
        bindingRoutingKey: documentrequest.adapter.*.*.*.policyinq.req
        deadLetterQueueName: pi-adapter-dead-letter-queue
        deadLetterExchange: PI-DocumentRequestService-AdapterService-Exchange-dlx
        deadLetterRoutingKey: documentrequest.adapter.*.*.*.policyinq.req
        maxAttempts: 1
        maxConcurrency: 10
Dead letter queues are not supported with anonymous subscriptions; you must add a group to the consumer binding.
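As a minimal sketch of that fix (the group name here is hypothetical), adding a group to the consumer binding gives the binder a named, durable queue, which is what allows autoBindDlq to provision the DLQ and dead letter exchange:

spring:
  cloud:
    stream:
      bindings:
        documentrequest-policyinqadapter:
          group: policyinqadapter   # hypothetical group name; with an anonymous (group-less)
                                    # subscription the binder creates an auto-delete queue
                                    # and skips DLQ provisioning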
I'm struggling to find, in Google or the Spring docs, any way to set the Spring container properties in the yml file instead of programmatically.
I want to set the property "idleBetweenPolls" for one specific topic + consumer.
I've achieved it programmatically (it currently applies to all topics/consumers; I'd need to add some conditions there).
@Bean
public ListenerContainerCustomizer<AbstractMessageListenerContainer<?, ?>> customizer() {
    return (container, dest, group) -> {
        log.info("Container : {}, dest: {}, group: {}", container, dest, group);
        container.getContainerProperties().setIdleBetweenPolls(15000);
    };
}
How can I set that at the yml level? I tried the config below, with no success:
spring.cloud.stream:
  kafka:
    binder:
      autoCreateTopics: true
      autoAddPartitions: true
      healthTimeout: 10
      requiredAcks: 1
      minPartitionCount: 1
      replicationFactor: 1
      headerMapperBeanName: customHeaderMapper
    bindings:
      command-my-setup-input-channel:
        consumer:
          autoCommitOffset: false
          batch-mode: true
          startOffset: earliest
          resetOffsets: true
          converter-bean-name: batchConverter
          ackMode: manual
          idleBetweenPolls: 90000 # not working
          configuration:
            heartbeat.interval.ms: 1000
            max.poll.records: 2
            max.poll.interval.ms: 890000
            value.deserializer: com.xpto.MySetupDTODeserializer
  bindings:
    command-my-setup-input-channel:
      destination: command.my.setup
      content-type: application/json
      binder: kafka
      configuration:
        value:
          deserializer: com.xpto.MySetupDTODeserializer
      consumer:
        batch-mode: true
        startOffset: earliest
        resetOffsets: true
Version: spring-cloud-stream 3.0.12.RELEASE
Boot does not support all container properties in yml, just some of the common ones; your current solution is correct.
You could open a new feature suggestion against Boot.
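In the meantime, a minimal sketch of the condition the asker mentions, staying with the programmatic approach (it mirrors the bean above; the check assumes the second lambda argument is the destination name, e.g. command.my.setup from the config, which the existing log line can confirm):

@Bean
public ListenerContainerCustomizer<AbstractMessageListenerContainer<?, ?>> customizer() {
    return (container, destination, group) -> {
        // Apply the tuning only to this one binding; other containers keep their defaults.
        if ("command.my.setup".equals(destination)) {
            container.getContainerProperties().setIdleBetweenPolls(90000);
        }
    };
}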
I am using "spring-cloud-stream-binder-kafka" for my consumer, reading from an AVRO topic.
It's a new consumer with a new consumer group. After running the application I get this log: "Found no committed offset for partition 'topic-name-x'". I read that this log is expected for new consumer groups, but even so it's not consuming any messages.
I have the below config for the consumer:
spring:
  cloud:
    function:
      definition: input
    stream:
      bindings:
        input-in-0:
          destination: topic-name
          group: group-name
      kafka:
        binder:
          autoCreateTopics: false
          brokers: broker-server
          configuration:
            security.protocol: SSL
            ssl.truststore.type: JKS
            ssl.truststore.location:
            ssl.truststore.password:
            ssl.keystore.type: JKS
            ssl.keystore.location:
            ssl.keystore.password:
            ssl.key.password:
            request.timeout.ms:
            max.request.size:
          consumerProperties:
            key.deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
            value.deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
            schema.registry.url: url
            basic.auth.credentials.source: USER_INFO
            basic.auth.user.info: ${AUTH_USER}:${AUTH_USER_PASS}
            specific.avro.reader: true
            spring.deserializer.key.delegate.class: org.apache.kafka.common.serialization.StringDeserializer
            spring.deserializer.value.delegate.class: io.confluent.kafka.serializers.KafkaAvroDeserializer
        bindings:
          input-in-0:
            consumer:
              autoCommitOffset: false
Why is it not able to consume the messages? I tried setting resetOffsets: true and startOffset: earliest, but still no luck.
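For reference, a sketch of where startOffset and resetOffsets are expected in this layout (binding-level Kafka consumer properties, next to the existing autoCommitOffset); this shows only the placement that was tried, not a confirmed fix:

spring:
  cloud:
    stream:
      kafka:
        bindings:
          input-in-0:
            consumer:
              autoCommitOffset: false
              startOffset: earliest   # where to start when the group has no committed offset
              resetOffsets: true      # reset the consumer to startOffset on startup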
I am probably missing a setting or two here, but with the KafkaListenerContainerFactory set up with ackMode MANUAL, I see messages streaming through the @KafkaListener method when I print them on receipt. We are not acknowledging the message anywhere in the application.
I'd appreciate it if anyone can point out where the issue is. Thanks in advance.
Using spring-kafka version 2.3.0.RELEASE.
ListenerContainerFactory
@Bean
public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<GenericData.Record, GenericData.Record>>
        kafkaListenerContainerFactory(ConsumerFactory<GenericData.Record, GenericData.Record> consumerFactory) {
    requireNonNull(consumerFactory, "consumerFactory must not be null");
    ConcurrentKafkaListenerContainerFactory<GenericData.Record, GenericData.Record> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    factory.setBatchListener(true);
    factory.setConcurrency(4);
    factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL);
    return factory;
}
The messages are received here:
public List<InventoryRecord> onMessage(@Payload List<ConsumerRecord<byte[], GenericArray<GenericRecord>>> consumerRecords,
        Acknowledgment acknowledgment) {
application.yml
kafka:
  consumer:
    key-deserializer: org.apache.kafka.common.serialization.ByteArrayDeserializer
    value-deserializer: io.confluent.kafka.serializers.KafkaAvroDeserializer
    heartbeat-interval: 50000ms
    client-id: card-processor-consumer
    group-id: card-processor-consumer
    max-poll-records: 24
    properties:
      basic:
        auth:
          credentials:
            source: SASL_INHERIT
      security.protocol: SASL_SSL
      sasl.mechanism: SCRAM-SHA-512
      sasl.jaas.config: org.apache.kafka.common.security.scram.ScramLoginModule required username=zzzzzzz password=xxxxxxxxx;
      request.timeout.ms: 1800000
      session.timeout.ms: 360000
      max.poll.interval.ms: 290000
  listener:
    concurrency: 4
    ack-mode: manual
My understanding is that the next message will not arrive until the last message's offset is committed.
Your understanding is incorrect.
Kafka maintains two pointers for each group/partition - the current position and the committed offset.
The current position is the next record that will be read on the next poll. The committed offset is the next record that will be read by this consumer when it is next restarted, or a rebalance occurs. i.e. the position is reset to the committed offset when the consumer leaves the group, or a partition is revoked.
There is no other relationship between these pointers; committing the offset has no bearing on fetching the next record from the current position.
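To make the distinction concrete, a minimal sketch using the plain KafkaConsumer API (broker, topic, and group names are hypothetical): the poll loop keeps advancing the position and receiving new records even though nothing is ever committed; only a restart or rebalance would rewind it to the committed offset.

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class PositionVsCommittedDemo {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // hypothetical broker
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "demo-group");              // hypothetical group
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");         // nothing is ever committed
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("demo-topic"));                        // hypothetical topic
            while (true) {
                // Each poll returns the records at the current position and advances it,
                // independently of the committed offset.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                records.forEach(r -> System.out.printf("offset=%d value=%s%n", r.offset(), r.value()));
            }
        }
    }
}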
When I get a message from the queue and an exception is thrown, I want to receive the message again.
So I created my consumer with a DLQ:
spring:
  cloud:
    stream:
      bindings:
        to-send-output:
          destination: to-send-event
          producer:
            required-groups:
              - to-send-event-group
        to-send-input:
          destination: to-send-event
          group: to-send-event-group
          consumer:
            max-attempts: 1
            requeueRejected: true
      rabbit:
        bindings:
          # Forever retry
          to-send-input:
            consumer:
              autoBindDlq: true
              dlqTtl: 5000
              dlqDeadLetterExchange:
              maxConcurrency: 300
              frameMaxHeadroom: 25000 # I added this as in the documentation
I added the property frameMaxHeadroom: 25000 as it says in the documentation, but it still does not work.
My springCloudVersion="Hoxton.RELEASE".
My dependency:
dependencies {
    ...
    implementation "org.springframework.cloud:spring-cloud-starter-stream-rabbit"
    ...
}
In the repository on GitHub, I see the frameMaxHeadroom property in the property file.
I see that the code truncates the stack trace by the value I set (from the frameMaxHeadroom variable). I expected not to be decreasing the stack trace but to be increasing the space available for the consumer's headers, as written in the documentation. Why isn't it working as I expect?
frameMax is negotiated between the AMQP client and server; all headers must fit in one frame. You can increase it with the broker configuration.
Stack traces can be large and can easily exceed the frameMax alone; in order to leave room for other headers, the framework leaves at least 20,000 bytes (by default) free for other headers, by truncating the stack trace header if necessary.
If you are exceeding your frameMax, you must have other large headers. You need to increase the headroom to allow for those headers, so the stack trace is truncated further.
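As a sketch with illustrative values only: the frame size itself is a broker setting (frame_max in rabbitmq.conf, 131072 bytes by default), while the headroom is a consumer binding property; raising the headroom reserves more of the frame for the other headers by truncating the stack trace header further:

spring:
  cloud:
    stream:
      rabbit:
        bindings:
          to-send-input:
            consumer:
              autoBindDlq: true
              frameMaxHeadroom: 50000   # illustrative: reserve ~50 kB of the frame for headers
                                        # other than the republished stack trace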
I'm using spring-cloud-stream for communication between microservices. I have the following predefined setup in the RabbitMQ broker:
"first" -> an exchange of type topic, bound to a queue (name="user.create", x-dead-letter-exchange="first.dlx")
"first.dlx" -> a dead letter exchange of type topic
and the following configuration file:
spring:
  cloud:
    stream:
      bindings:
        consumer-input:
          group: user.create
          destination: first
          contentType: application/json
          binder: rabbit
      binders:
        rabbit:
          type: rabbit
      rabbit:
        bindings:
          consumer-input:
            consumer:
              acknowledgeMode: manual
              declareExchange: false
              queueNameGroupOnly: true
              bindQueue: false
              deadLetterExchange: first.dlx
              autoBindDlq: true
              deadLetterRoutingKey: user.create.dlq
and when I start the application, it says:
[AMQP Connection 127.0.0.1:5672] ERROR o.s.a.r.c.CachingConnectionFactory - Channel shutdown: channel error; protocol method: #method<channel.close>(reply-code=406, reply-text=PRECONDITION_FAILED - inequivalent arg 'type' for exchange 'first.dlx' in vhost '/': received 'direct' but current is 'topic', class-id=40, method-id=10)
because the application tries to declare the DLX with type "direct". Here is the link to the repo.
So my question: is there any way to tell the binder to declare a DLX of a type other than "direct", something like a property "deadLetterExchangeType: topic"? Or not to declare the DLX at all?
Any other suggestion would be very helpful.
It is not currently possible to define the DLX exchange type or prevent its declaration. Please open an issue against the binder.
Just allowing the type to be specified might not be enough, since it might have other incompatible arguments. We should probably add declareDlx, similar to declareExchange.
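Purely to illustrate those two suggestions (neither property exists in the binder at the time of this answer; both names are hypothetical), a consumer binding might then look like:

spring:
  cloud:
    stream:
      rabbit:
        bindings:
          consumer-input:
            consumer:
              deadLetterExchange: first.dlx
              deadLetterExchangeType: topic   # hypothetical: declare the DLX as a topic exchange
              declareDlx: false               # hypothetical alternative: do not declare the
                                              # pre-existing DLX at all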