I am using Spring Cloud Stream with RabbitMQ.
I want to be able to configure message and queue properties from source code rather than from a property file (as their docs describe).
For example, with the classic Java client for RabbitMQ I can do something like this to create a queue with the properties I want:
// queue, durable, exclusive, autoDelete, arguments
channel.queueDeclare("myQueue", true, false, false, null);
Any ideas on how I can achieve the same thing using Spring Cloud Stream?
You can add all these values inside application.yml; the following is an example:
spring:
  cloud:
    stream:
      instance-count: 1
      bindings:
        input:
          consumer:
            concurrency: 2
            maxAttempts: 1
          group: geode-sink
          destination: jdbc-event-result
          binder: rabbit
      rabbit:
        bindings:
          input:
            consumer:
              autoBindDlq: true
              republishToDlq: true
              requeueRejected: false
  rabbitmq:
    username: ur-user-name
    password: ur-password
    host: rabbitmq-url-replace-here
    port: 5672
  datasource:
    platform: mysql
    url: jdbc:mysql-url-replace-here
    username: ur-user-name
    password: ur-password
    driverClassName: com.mysql.jdbc.Driver
    tomcat:
      max-wait: 300
      min-idle: 10
      max-idle: 100
aggregator:
  groupCount: 2
  batchSize: 1000
  batchTimeout: 1000
Updated:
https://cloud.spring.io/spring-cloud-static/spring-cloud-stream-binder-rabbit/2.2.0.M1/spring-cloud-stream-binder-rabbit.html
https://github.com/spring-projects/spring-xd/blob/master/spring-xd-dirt/src/main/resources/application.yml
After digging through the documentation, and with the help of @vaquar khan, I found out that the only way to do it is from your property file.
application.yml
spring:
  cloud:
    stream:
      bindings:
        queue_name:
          destination: queue_name
          group: your_group_name
          durable-subscription: true
This will declare a durable, non-auto-delete and non-exclusive queue.
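If you need finer control over how the binder declares the queue (the rough equivalent of the flags passed to queueDeclare), the RabbitMQ binder also exposes consumer properties under spring.cloud.stream.rabbit.bindings. A sketch, assuming the binding is named queue_name as above; the values are illustrative:

```yaml
spring:
  cloud:
    stream:
      rabbit:
        bindings:
          queue_name:
            consumer:
              # make the subscription durable (requires a group on the binding)
              durableSubscription: true
              # request an exclusive consumer on the queue
              exclusive: false
              # also declare a dead-letter queue alongside the main queue
              autoBindDlq: true
```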
Related
I'm struggling to find, in Google or the Spring docs, any way to set the Spring container properties in the yml file instead of programmatically.
I want to set the property "idleBetweenPolls" for one specific topic + consumer.
I've achieved it programmatically (it currently applies to all topics/consumers; I'd need to add some conditions there).
@Bean
public ListenerContainerCustomizer<AbstractMessageListenerContainer<?, ?>> customizer() {
    return (container, dest, group) -> {
        log.info("Container: {}, dest: {}, group: {}", container, dest, group);
        container.getContainerProperties().setIdleBetweenPolls(15000);
    };
}
How can I set that at the yml level? I tried the config below with no success:
spring.cloud.stream:
  kafka:
    binder:
      autoCreateTopics: true
      autoAddPartitions: true
      healthTimeout: 10
      requiredAcks: 1
      minPartitionCount: 1
      replicationFactor: 1
      headerMapperBeanName: customHeaderMapper
    bindings:
      command-my-setup-input-channel:
        consumer:
          autoCommitOffset: false
          batch-mode: true
          startOffset: earliest
          resetOffsets: true
          converter-bean-name: batchConverter
          ackMode: manual
          idleBetweenPolls: 90000 # not working
          configuration:
            heartbeat.interval.ms: 1000
            max.poll.records: 2
            max.poll.interval.ms: 890000
            value.deserializer: com.xpto.MySetupDTODeserializer
  bindings:
    command-my-setup-input-channel:
      destination: command.my.setup
      content-type: application/json
      binder: kafka
      configuration:
        value:
          deserializer: com.xpto.MySetupDTODeserializer
      consumer:
        batch-mode: true
        startOffset: earliest
        resetOffsets: true
Version: spring-cloud-stream 3.0.12.RELEASE
Boot does not support all container properties in yml, just some of the common ones; your current solution is correct.
You could open a new feature suggestion against Boot.
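The conditional version the asker mentioned can be sketched as follows. This is only a sketch: the destination name command.my.setup is taken from the config above, while the group name "my-group" is a hypothetical stand-in.

```java
import org.springframework.cloud.stream.config.ListenerContainerCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.listener.AbstractMessageListenerContainer;

@Configuration
public class ContainerTuningConfig {

    // Restrict the customization to a single destination/group pair
    // instead of applying it to every container in the application.
    @Bean
    public ListenerContainerCustomizer<AbstractMessageListenerContainer<?, ?>> customizer() {
        return (container, dest, group) -> {
            if ("command.my.setup".equals(dest) && "my-group".equals(group)) {
                container.getContainerProperties().setIdleBetweenPolls(90000);
            }
        };
    }
}
```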
We are following a microservice architecture. Let's say we have two microservices.
The 1st MS has the below configuration:
spring:
  cloud:
    stream:
      bindings:
        check:
          destination: checkAndTest
        check2:
          destination: checkAndTest2
The 2nd MS does not have such a configuration.
Now the problem is that for the 1st MS the "springCloudBus" exchange is not getting created.
The only difference I found was this specific configuration.
For this reason, when we hit /actuator/busrefresh, the call does not go to all instances of the 1st MS.
I am using "spring-cloud-stream-binder-kafka" for my consumer, with an AVRO topic.
It's a new consumer with a new consumer group. After running the application I get the log "Found no committed offset for partition 'topic-name-x'". I read that this log is expected for new consumer groups, but even so it's not consuming any messages.
I have the below config for the consumer:
spring:
  cloud:
    function:
      definition: input
    stream:
      bindings:
        input-in-0:
          destination: topic-name
          group: group-name
      kafka:
        binder:
          autoCreateTopics: false
          brokers: broker-server
          configuration:
            security.protocol: SSL
            ssl.truststore.type: JKS
            ssl.truststore.location:
            ssl.truststore.password:
            ssl.keystore.type: JKS
            ssl.keystore.location:
            ssl.keystore.password:
            ssl.key.password:
            request.timeout.ms:
            max.request.size:
          consumerProperties:
            key.deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
            value.deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
            schema.registry.url: url
            basic.auth.credentials.source: USER_INFO
            basic.auth.user.info: ${AUTH_USER}:${AUTH_USER_PASS}
            specific.avro.reader: true
            spring.deserializer.key.delegate.class: org.apache.kafka.common.serialization.StringDeserializer
            spring.deserializer.value.delegate.class: io.confluent.kafka.serializers.KafkaAvroDeserializer
        bindings:
          input-in-0:
            consumer:
              autoCommitOffset: false
Why is it not able to consume the messages? I tried setting resetOffsets: true and startOffset: earliest, but still no luck.
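One thing worth double-checking with the functional model: spring.cloud.function.definition: input means the binding name input-in-0 is derived from a function bean named exactly input. A minimal sketch of what the binder expects, where MyEvent is a hypothetical stand-in for the generated Avro type:

```java
import java.util.function.Consumer;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class InputBindingConfig {

    // Stand-in for the generated Avro class.
    public record MyEvent(String id) { }

    // The bean name "input" must match spring.cloud.function.definition;
    // the binder then wires it to the input-in-0 binding above.
    @Bean
    public Consumer<MyEvent> input() {
        return event -> System.out.println("received " + event);
    }
}
```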
I'm trying to set up RabbitMQ with Spring Cloud Stream support.
I have a couple of consumers and producers. One of the producers should produce messages to a separate virtual host on the same RabbitMQ instance (later it might be a different physical instance).
application.yaml
spring:
  cloud:
    stream:
      binders:
        binder1:
          type: rabbit
          defaultCandidate: false
          inheritEnvironment: false
          environment:
            spring:
              rabbitmq:
                host: localhost
                port: 5672
                virtual-host: virtual-host-1
                username: guest
                password: guest
        binder2:
          type: rabbit
          defaultCandidate: false
          inheritEnvironment: false
          environment:
            spring:
              rabbitmq:
                host: localhost
                port: 5672
                virtual-host: virtual-host-2
                username: guest
                password: guest
      bindings:
        test:
          binder: binder1
        coordinates:
          destination: coordinates
          binder: binder1
        events:
          destination: events
          binder: binder1
        events_output:
          destination: events
          binder: binder1
        tasks:
          destination: tasks
          binder: binder2
The goal is that the tasks binding should use vhost virtual-host-2, while the other bindings should use vhost virtual-host-1.
However, the binder value seems to be ignored, and the default rabbit binder with its default settings is used at application startup.
I noticed this while debugging at runtime: the binder value on each binding is null, although the value is explicitly provided in the properties.
If I set defaultCandidate of either binder to true, then that binder's settings are used as a replacement for the default one.
Is something misconfigured?
This is one of the reasons why I don't like YAML: it's hard to track what may be misconfigured. In any event, here is the working example I just tried.
spring:
  cloud:
    stream:
      bindings:
        input:
          binder: rabbit1
          group: vhost1-group
          destination: vhost1-queue
        output:
          binder: rabbit2
          destination: vhost2-queue
      binders:
        rabbit1:
          type: rabbit
          environment:
            spring:
              rabbitmq:
                host: localhost
                virtual-host: vhost1
        rabbit2:
          type: rabbit
          environment:
            spring:
              rabbitmq:
                host: localhost
                virtual-host: vhost2
I just copied/pasted your config, fixed some indentation, and it worked fine for me...
spring:
  cloud:
    stream:
      binders:
        binder1:
          type: rabbit
          defaultCandidate: false
          inheritEnvironment: false
          environment:
            spring:
              rabbitmq:
                host: localhost
                port: 5672
                virtual-host: virtual-host-1
                username: guest
                password: guest
        binder2:
          type: rabbit
          defaultCandidate: false
          inheritEnvironment: false
          environment:
            spring:
              rabbitmq:
                host: localhost
                port: 5672
                virtual-host: virtual-host-2
                username: guest
                password: guest
      bindings:
        test:
          binder: binder1
        tasks:
          destination: tasks
          binder: binder2
@SpringBootApplication
@EnableBinding(Foo.class)
public class So56462671Application {

    public static void main(String[] args) {
        SpringApplication.run(So56462671Application.class, args);
    }

}

interface Foo {

    @Input
    MessageChannel test();

    @Input
    MessageChannel tasks();

}
Your defaultCandidate: false and inheritEnvironment: false lines were incorrectly indented, but when I tried them that way I got a YAML parse error.
I'm building an app with Spring Boot that uses MySQL as the database. What I'm seeing is that Spring Boot opens 10 connections to the DB but uses only one.
Every time I run show processlist on the DB, 9 connections are sleeping and only one is doing something.
Is there a way to split the work between the 10 opened connections?
My app needs better MySQL throughput: about 300 records are inserted every minute, so I think splitting the work between these opened connections would give better results.
My application.yml:
security:
  basic:
    enabled: false
server:
  context-path: /invest/
  compression:
    enabled: true
    mime-types:
      - application/json,application/xml,text/html,text/xml,text/plain
spring:
  jackson:
    default-property-inclusion: non-null
    serialization:
      write-bigdecimal-as-plain: true
    deserialization:
      read-enums-using-to-string: true
  datasource:
    platform: MYSQL
    url: jdbc:mysql://localhost:3306/invest?useSSL=true
    username: #
    password: #
    separator: $$
    dbcp2:
      test-while-idle: true
      validation-query: SELECT 1
  jpa:
    show-sql: false
    hibernate:
      ddl-auto: update
      naming:
        strategy: org.hibernate.cfg.ImprovedNamingStrategy
    properties:
      hibernate:
        dialect: org.hibernate.dialect.MySQL5Dialect
  http:
    encoding:
      charset: UTF-8
      enabled: true
      force: true
Is there a way to do this?
You can check my.ini ([mysqld] max_connections) to make sure your MySQL allows enough connections.
You can adjust these settings in your application.yml:
spring.datasource.max-active
spring.datasource.max-age
spring.datasource.max-idle
spring.datasource.max-lifetime
spring.datasource.max-open-prepared-statements
spring.datasource.max-wait
spring.datasource.maximum-pool-size
spring.datasource.min-evictable-idle-time-millis
spring.datasource.min-idle
After all, I do not think this is a problem. MySQL can handle 1000+ insertions per second; 300 per minute is far too few for the pool to need more than one connection.
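In application.yml form, the pool-related properties above would look something like this. This is a sketch for the Tomcat pool that older Boot versions use by default with the datasource shown in the question; the values are illustrative, not recommendations:

```yaml
spring:
  datasource:
    max-active: 10    # upper bound on concurrent connections in the pool
    min-idle: 2       # keep a couple of connections warm
    max-idle: 5       # idle connections above this are closed
    max-wait: 30000   # ms to wait for a free connection before failing
```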