How to set Spring Container properties at yml level?

I'm struggling to find, in Google or the Spring docs, any way to set the Spring container properties in the yml file instead of programmatically.
I want to set the property "idleBetweenPolls" for one specific topic + consumer.
I've achieved it programmatically (it currently applies to all topics/consumers; I'd need to add some conditions there):
@Bean
public ListenerContainerCustomizer<AbstractMessageListenerContainer<?, ?>> customizer() {
    return (container, dest, group) -> {
        log.info("Container : {}, dest: {}, group: {}", container, dest, group);
        container.getContainerProperties().setIdleBetweenPolls(15000);
    };
}
How can I set that at yml level? I tried the config below with no success:
spring.cloud.stream:
  kafka:
    binder:
      autoCreateTopics: true
      autoAddPartitions: true
      healthTimeout: 10
      requiredAcks: 1
      minPartitionCount: 1
      replicationFactor: 1
      headerMapperBeanName: customHeaderMapper
    bindings:
      command-my-setup-input-channel:
        consumer:
          autoCommitOffset: false
          batch-mode: true
          startOffset: earliest
          resetOffsets: true
          converter-bean-name: batchConverter
          ackMode: manual
          idleBetweenPolls: 90000 # not working
          configuration:
            heartbeat.interval.ms: 1000
            max.poll.records: 2
            max.poll.interval.ms: 890000
            value.deserializer: com.xpto.MySetupDTODeserializer
  bindings:
    command-my-setup-input-channel:
      destination: command.my.setup
      content-type: application/json
      binder: kafka
      configuration:
        value:
          deserializer: com.xpto.MySetupDTODeserializer
      consumer:
        batch-mode: true
        startOffset: earliest
        resetOffsets: true
Version: spring-cloud-stream 3.0.12.RELEASE

Boot does not support all container properties in yml, just some of the common ones; your current solution is correct.
You could open a new feature suggestion against Boot.
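In the meantime you can narrow your existing customizer to the one binding; a minimal sketch, assuming the destination command.my.setup from your config (adjust the check to whatever condition you need):

import org.springframework.cloud.stream.config.ListenerContainerCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.listener.AbstractMessageListenerContainer;

@Configuration
public class ContainerCustomizerConfig {

    @Bean
    public ListenerContainerCustomizer<AbstractMessageListenerContainer<?, ?>> customizer() {
        return (container, dest, group) -> {
            // Apply the longer idle time only to the one destination we care about.
            if ("command.my.setup".equals(dest)) {
                container.getContainerProperties().setIdleBetweenPolls(90_000);
            }
        };
    }
}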

Related

Continuously Getting "Found no committed offset for partition 'topic-name-x' " for new consumer and consumer group

I am using "spring-cloud-stream-binder-kafka" for my consumer, having AVRO topic.
it's new consumer with new consumer group. After running the application I am getting this log "Found no committed offset for partition 'topic-name-x'". I read that it is expected to get this log for new consumer groups but even after that it's not consuming any messages.
Having below config for consumer:
spring:
  cloud:
    function:
      definition: input
    stream:
      bindings:
        input-in-0:
          destination: topic-name
          group: group-name
      kafka:
        binder:
          autoCreateTopics: false
          brokers: broker-server
          configuration:
            security.protocol: SSL
            ssl.truststore.type: JKS
            ssl.truststore.location:
            ssl.truststore.password:
            ssl.keystore.type: JKS
            ssl.keystore.location:
            ssl.keystore.password:
            ssl.key.password:
            request.timeout.ms:
            max.request.size:
          consumerProperties:
            key.deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
            value.deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
            schema.registry.url: url
            basic.auth.credentials.source: USER_INFO
            basic.auth.user.info: ${AUTH_USER}:${AUTH_USER_PASS}
            specific.avro.reader: true
            spring.deserializer.key.delegate.class: org.apache.kafka.common.serialization.StringDeserializer
            spring.deserializer.value.delegate.class: io.confluent.kafka.serializers.KafkaAvroDeserializer
        bindings:
          input-in-0:
            consumer:
              autoCommitOffset: false
Why is it not able to consume the messages? I tried setting resetOffsets: true and startOffset: earliest, but still no luck.
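For reference, the consumer bean behind spring.cloud.function.definition: input looks roughly like this (a minimal sketch; with specific.avro.reader: true the payload type would be the generated Avro class rather than GenericRecord):

import java.util.function.Consumer;

import org.apache.avro.generic.GenericRecord;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class InputConsumerConfig {

    private static final Logger log = LoggerFactory.getLogger(InputConsumerConfig.class);

    // The bean name "input" must match spring.cloud.function.definition and the
    // input-in-0 binding name.
    @Bean
    public Consumer<GenericRecord> input() {
        return record -> log.info("Received: {}", record);
    }
}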

Dead Letter Queue not being created

My exchange and DLQ are not being created. I have the following in my YML below. I do get an anonymous queue created, but no messages are posted either. Any thoughts?
rabbit:
  bindings:
    documentrequest-policyinqadapter:
      producer:
        bindingRoutingKey: documentrequest.adapter.*.*.*.policyinq.req
        routing-key-expression: headers['events-type']
      consumer:
        autoBindDlq: true
        republishToDlq: true
        requeueRejected: false
        bindingRoutingKey: documentrequest.adapter.*.*.*.policyinq.req
        deadLetterQueueName: pi-adapter-dead-letter-queue
        deadLetterExchange: PI-DocumentRequestService-AdapterService-Exchange-dlx
        deadLetterRoutingKey: documentrequest.adapter.*.*.*.policyinq.req
        maxAttempts: 1
        maxConcurrency: 10
Dead letter queues are not supported with anonymous subscriptions; you must add a group to the consumer binding.

Spring cloud stream RabbitMq - Set properties from source code

I am using Spring Cloud Stream with RabbitMQ.
I want to be able to configure message and queue properties from source code and not from a property file (as they mention in their docs).
For example, with the classic Java client for RabbitMQ I can do something like this to create a queue with the properties I want:
// queue name, durable, exclusive, auto-delete, arguments
channel.queueDeclare("myQueue", true, false, false, null);
Any ideas on how I can achieve the same thing using Spring Cloud Stream?
Inside of "application.yml" you can add all this values , following is example
spring:
  cloud:
    stream:
      instance-count: 1
      bindings:
        input:
          consumer:
            concurrency: 2
            maxAttempts: 1
          group: geode-sink
          destination: jdbc-event-result
          binder: rabbit
      rabbit:
        bindings:
          input:
            consumer:
              autoBindDlq: true
              republishToDlq: true
              requeueRejected: false
  rabbitmq:
    username: ur-user-name
    password: ur-password
    host: rabbitmq-url-replace-here
    port: 5672
  datasource:
    platform: mysql
    url: jdbc:mysql-url-replace-here
    username: ur-user-name
    password: ur-password
    driverClassName: com.mysql.jdbc.Driver

datasource:
  tomcat:
    max-wait: 300
    min-idle: 10
    max-idle: 100

aggregator:
  groupCount: 2
  batchSize: 1000
  batchTimeout: 1000
Updated:
https://cloud.spring.io/spring-cloud-static/spring-cloud-stream-binder-rabbit/2.2.0.M1/spring-cloud-stream-binder-rabbit.html
https://github.com/spring-projects/spring-xd/blob/master/spring-xd-dirt/src/main/resources/application.yml
After digging in their documentation, and with the help of @vaquar khan, I found out that the only way to do it is from your property file.
application.yml
spring:
  cloud:
    stream:
      bindings:
        queue_name:
          destination: queue_name
          group: your_group_name
          durable-subscription: true
This will declare a durable, non-auto-delete and non-exclusive queue.
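For comparison with the classic-client call in the question, the queue the binder declares for this binding is roughly equivalent to the following sketch (the binder names the queue <destination>.<group>, here queue_name.your_group_name):

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class QueueDeclareSketch {

    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {
            // durable = true, exclusive = false, auto-delete = false, no extra arguments;
            // the binder derives the queue name from <destination>.<group>.
            channel.queueDeclare("queue_name.your_group_name", true, false, false, null);
        }
    }
}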

How to set Binders for Spring Cloud Stream Bindings on different RabbitMQ vhosts

I'm trying to set up RabbitMQ with Spring Cloud Stream support.
I have a couple of consumers and producers. One of the producers should produce messages to a separate virtual host on the same RabbitMQ instance (later it might be a different physical instance).
application.yaml
spring:
  cloud:
    stream:
      binders:
        binder1:
          type: rabbit
          defaultCandidate: false
          inheritEnvironment: false
          environment:
            spring:
              rabbitmq:
                host: localhost
                port: 5672
                virtual-host: virtual-host-1
                username: guest
                password: guest
        binder2:
          type: rabbit
          defaultCandidate: false
          inheritEnvironment: false
          environment:
            spring:
              rabbitmq:
                host: localhost
                port: 5672
                virtual-host: virtual-host-2
                username: guest
                password: guest
      bindings:
        test:
          binder: binder1
        coordinates:
          destination: coordinates
          binder: binder1
        events:
          destination: events
          binder: binder1
        events_output:
          destination: events
          binder: binder1
        tasks:
          destination: tasks
          binder: binder2
The goal is that the tasks binding should use vhost virtual-host-2 and the other bindings should use vhost virtual-host-1.
However, the binder value seems to be ignored, and the default rabbit binder with its default settings is used on application startup.
I noticed it while debugging at runtime: the binder value on each binding is null, although the value is explicitly provided in the properties.
If I set defaultCandidate to true on either of the binders, that binder's settings are used as a replacement for the default one.
Is something misconfigured?
This is one of the reasons why I don't like yaml. It's hard to track what may be misconfigured. In any event, here is the working example I just tried.
spring:
  cloud:
    stream:
      bindings:
        input:
          binder: rabbit1
          group: vhost1-group
          destination: vhost1-queue
        output:
          binder: rabbit2
          destination: vhost2-queue
      binders:
        rabbit1:
          type: rabbit
          environment:
            spring:
              rabbitmq:
                host: localhost
                virtual-host: vhost1
        rabbit2:
          type: rabbit
          environment:
            spring:
              rabbitmq:
                host: localhost
                virtual-host: vhost2
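For completeness, a rough sketch of the binding interface this example assumes (hypothetical names; the @Input/@Output method names must match the input and output bindings above):

import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.Output;
import org.springframework.context.annotation.Configuration;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.SubscribableChannel;

// Hypothetical binding interface; method names match the binding names in the YAML.
interface VhostChannels {

    @Input
    SubscribableChannel input();   // consumes from vhost1-queue via binder rabbit1

    @Output
    MessageChannel output();       // produces to vhost2-queue via binder rabbit2
}

@Configuration
@EnableBinding(VhostChannels.class)
class VhostBindingConfiguration {
}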
I just copied/pasted your config; fixed some indentation and it worked fine for me...
spring:
  cloud:
    stream:
      binders:
        binder1:
          type: rabbit
          defaultCandidate: false
          inheritEnvironment: false
          environment:
            spring:
              rabbitmq:
                host: localhost
                port: 5672
                virtual-host: virtual-host-1
                username: guest
                password: guest
        binder2:
          type: rabbit
          defaultCandidate: false
          inheritEnvironment: false
          environment:
            spring:
              rabbitmq:
                host: localhost
                port: 5672
                virtual-host: virtual-host-2
                username: guest
                password: guest
      bindings:
        test:
          binder: binder1
        tasks:
          destination: tasks
          binder: binder2
@SpringBootApplication
@EnableBinding(Foo.class)
public class So56462671Application {

    public static void main(String[] args) {
        SpringApplication.run(So56462671Application.class, args);
    }

}

interface Foo {

    @Input
    MessageChannel test();

    @Input
    MessageChannel tasks();

}
In your posted config, defaultCandidate: false and inheritEnvironment: false were incorrectly indented, but when I tried it with that indentation I got a YAML parse error (so the config you are actually running may differ from what you posted).

Custom DLX options in rabbitmq binder

I'm using spring-cloud-stream for communication between microservices. I have the following predefined setup in the RabbitMQ broker:
"first" -> exchange of type Topic which is bound to Queue (name="user.create",x-dead-letter-exchange="first.dlx")
"first.dlx" -> dead letter exchange of type Topic
and the following configuration file:
spring:
  cloud:
    stream:
      bindings:
        consumer-input:
          group: user.create
          destination: first
          contentType: application/json
          binder: rabbit
      binders:
        rabbit:
          type: rabbit
      rabbit:
        bindings:
          consumer-input:
            consumer:
              acknowledgeMode: manual
              declareExchange: false
              queueNameGroupOnly: true
              bindQueue: false
              deadLetterExchange: first.dlx
              autoBindDlq: true
              deadLetterRoutingKey: user.create.dlq
When I start the application, it says:
[AMQP Connection 127.0.0.1:5672] ERROR o.s.a.r.c.CachingConnectionFactory - Channel shutdown: channel error; protocol method: #method<channel.close>(reply-code=406, reply-text=PRECONDITION_FAILED - inequivalent arg 'type' for exchange 'first.dlx' in vhost '/': received 'direct' but current is 'topic', class-id=40, method-id=10)
because RabbitMQ tries to declare the DLX with type "direct". Here is the link of the repo.
So my question: is there any way to tell RabbitMQ to declare a DLX of a type other than "direct" (something like a "deadLetterExchangeType: topic" property), or not to declare the DLX at all?
Any other suggestion would be very helpful.
It is not currently possible to define the DLX exchange type or prevent its declaration. Please open an issue against the binder.
Just allowing the type to be specified might not be enough, since it might have other incompatible arguments. We should probably add declareDlx, similar to declareExchange.
