I am using the Java DSL to configure my channel adapters. What I want to achieve can be described with the following piece of code:
IntegrationFlows
        .from(Jms.messageDriverChannelAdapter(mqCacheConnectionFactory)
                .configureListenerContainer(container -> container.sessionTransacted(transacted))
                .destinations(inputDestination1, inputDestination2) // missing method
                .autoStartup(autoStartup)
                .id(channelName)
                .errorChannel(errorChannel)
        )
        .channel(commonChannel)
        .get();
So I would like to have a messageDriverChannelAdapter that is capable of receiving from multiple JMS destinations. Is this achievable?
No, it isn't possible.
The Spring Integration JMS support is fully based on the Spring JMS foundation, and its AbstractMessageListenerContainer provides the ability to consume from only one destination. Therefore, Jms.messageDriverChannelAdapter() doesn't provide an option to configure several destinations to listen to.
The only option you have is to configure several Jms.messageDriverChannelAdapter()s. What is good about Spring Integration is that you can output them all to the same MessageChannel, so you won't have so much copy/paste hell.
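For example, a minimal sketch of two adapters feeding one common channel, reusing the names from your snippet (the flow bean names inputFlow1/inputFlow2 are made up):

import org.springframework.context.annotation.Bean;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;
import org.springframework.integration.jms.dsl.Jms;

@Bean
public IntegrationFlow inputFlow1() {
    return IntegrationFlows
            .from(Jms.messageDriverChannelAdapter(mqCacheConnectionFactory)
                    .destination(inputDestination1)) // one destination per adapter
            .channel(commonChannel)                  // both flows converge here
            .get();
}

@Bean
public IntegrationFlow inputFlow2() {
    return IntegrationFlows
            .from(Jms.messageDriverChannelAdapter(mqCacheConnectionFactory)
                    .destination(inputDestination2))
            .channel(commonChannel)
            .get();
}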
I am using Spring Cloud Stream and the Kafka binder to consume messages in batches from a Kafka topic. I am trying to implement an error handling mechanism. As per my understanding, I can't use Spring Cloud Stream's enableDLQ property in batch mode.
I have found RecoveringBatchErrorHandler and DeadLetterPublishingRecoverer in the spring-kafka documentation for retrying and sending failed messages. But I am not able to understand how to send the records to a custom DLQ topic following the functional programming standards. All the examples I can see are using KafkaTemplates.
Are there any good examples where I can find the implementation?
This is the spring doc I have been referring to.
https://docs.spring.io/spring-kafka/docs/2.5.12.RELEASE/reference/html/#recovering-batch-eh
That version is no longer supported as OSS; see https://spring.io/projects/spring-kafka#support
With the current version, use the DefaultErrorHandler configured with a DeadLetterPublishingRecoverer and throw a BatchListenerFailedException to tell the framework which record in the batch failed.
See https://docs.spring.io/spring-kafka/docs/current/reference/html/#annotation-error-handling and https://docs.spring.io/spring-kafka/docs/current/reference/html/#dead-letters and https://docs.spring.io/spring-kafka/docs/current/reference/html/#legacy-eh
Add a ListenerContainerCustomizer bean to add your configured error handler to the listener container.
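Here is a minimal sketch of both pieces, assuming a byte[]-based KafkaTemplate bean is available; the topic name my-custom-dlq, the retry settings, and the process(...) call are illustrative:

import java.util.List;
import java.util.function.Consumer;

import org.apache.kafka.common.TopicPartition;
import org.springframework.cloud.stream.config.ListenerContainerCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.listener.AbstractMessageListenerContainer;
import org.springframework.kafka.listener.BatchListenerFailedException;
import org.springframework.kafka.listener.DeadLetterPublishingRecoverer;
import org.springframework.kafka.listener.DefaultErrorHandler;
import org.springframework.messaging.Message;
import org.springframework.util.backoff.FixedBackOff;

@Bean
public ListenerContainerCustomizer<AbstractMessageListenerContainer<byte[], byte[]>> customizer(
        KafkaTemplate<byte[], byte[]> template) {
    return (container, destinationName, group) -> {
        // publish failed records to a custom topic instead of the default <topic>.DLT
        DeadLetterPublishingRecoverer recoverer = new DeadLetterPublishingRecoverer(template,
                (record, exception) -> new TopicPartition("my-custom-dlq", record.partition()));
        // retry twice with a one-second back off, then publish to the DLQ
        container.setCommonErrorHandler(new DefaultErrorHandler(recoverer, new FixedBackOff(1000L, 2L)));
    };
}

@Bean
public Consumer<List<Message<String>>> consume() {
    return messages -> {
        for (int i = 0; i < messages.size(); i++) {
            try {
                process(messages.get(i)); // your business logic (hypothetical)
            }
            catch (Exception ex) {
                // tells the framework which record in the batch failed; records before
                // the index are committed, the rest go through the error handler
                throw new BatchListenerFailedException("processing failed", ex, i);
            }
        }
    };
}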
In my application, there is a need to start and stop the inbound channel adapter's message consumption. I came across XML-based configuration, but I am looking for a Spring Integration annotation or Java configuration approach to start/stop an inbound channel adapter.
Can someone point out how to do it?
Take a look here, please: Spring Integration how to use Control Bus with JavaConfig, no DSL. That answer is related to your concern.
What you should do in your configuration is add an @EndpointId alongside the @InboundChannelAdapter in your Java config.
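A minimal sketch of that approach (bean, channel, and endpoint names are made up), combined with a control bus so you can start/stop the adapter at runtime:

import org.springframework.context.annotation.Bean;
import org.springframework.integration.annotation.EndpointId;
import org.springframework.integration.annotation.InboundChannelAdapter;
import org.springframework.integration.annotation.Poller;
import org.springframework.integration.annotation.ServiceActivator;
import org.springframework.integration.channel.DirectChannel;
import org.springframework.integration.config.ExpressionControlBusFactoryBean;
import org.springframework.integration.core.MessageSource;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.support.GenericMessage;

@Bean
@EndpointId("myAdapter") // the id used to address the endpoint from the control bus
@InboundChannelAdapter(channel = "input", poller = @Poller(fixedDelay = "5000"), autoStartup = "false")
public MessageSource<String> mySource() {
    return () -> new GenericMessage<>("some payload");
}

@Bean
public MessageChannel controlChannel() {
    return new DirectChannel();
}

@Bean
@ServiceActivator(inputChannel = "controlChannel")
public ExpressionControlBusFactoryBean controlBus() {
    return new ExpressionControlBusFactoryBean();
}

Sending new GenericMessage<>("@myAdapter.start()") (or "@myAdapter.stop()") to controlChannel then starts or stops the adapter.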
I am looking for a way to store system processes / tasks that the application will then execute according to the specified system-wide conditions. The point is that the system should check the input variables and trigger specific actions in the system according to the rules of the process.
I am probably looking for some form of meta-language in which I can write rules/actions, and that can be programmed to start and stop based on input system parameters.
In what format should such processes be recorded?
How can these jobs be parsed?
What design patterns apply to this?
Are there any existing solutions for this use-case?
Which Java libraries can be used for this?
If anything is unclear, I will gladly complete the question.
Thank you.
You could try Spring Batch. It introduces its own domain language for jobs and allows configuring them using either XML or Java.
Here are a couple of examples from their reference guide.
XML config:
<job id="footballJob">
<step id="playerload" next="gameLoad"/>
<step id="gameLoad" next="playerSummarization"/>
<step id="playerSummarization"/>
</job>
Java config:
@Bean
public Job footballJob() {
    return this.jobBuilderFactory.get("footballJob")
            .start(playerLoad())
            .next(gameLoad())
            .next(playerSummarization())
            .end()
            .build();
}
Spring Batch provides functionality for repeating or retrying failed job steps, as well as for parallel step execution.
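For example, a fault-tolerant step might look like this (Player, the reader/writer beans, and the exception type are illustrative):

@Bean
public Step playerLoad() {
    return this.stepBuilderFactory.get("playerLoad")
            .<Player, Player>chunk(10)
            .reader(playerReader())  // your ItemReader bean
            .writer(playerWriter())  // your ItemWriter bean
            .faultTolerant()
            .retryLimit(3)           // retry a failed item up to 3 times
            .retry(DeadlockLoserDataAccessException.class)
            .build();
}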
For some tasks, Apache Camel can also be used. It likewise provides its own DSL and both XML and Java configuration options.
Both frameworks provide abstractions for describing the sequence of actions which should be done during the job. Apache Camel is more convenient for jobs which require integration tasks (sending messages to JMS queues, calling REST or web services, sending emails, etc.). An advantage of Spring Batch is the ability to configure application behavior in case of an error or temporary inaccessibility of a service which should be called (repeat/retry mechanisms). Both frameworks can be integrated with each other: you can call Spring Batch jobs from Apache Camel routes or initiate Apache Camel routes from Spring Batch jobs.
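For illustration, a Camel route in its Java DSL might look like this (the endpoint URIs are made up):

import org.apache.camel.builder.RouteBuilder;

public class OrderRoute extends RouteBuilder {

    @Override
    public void configure() {
        from("jms:queue:orders")                       // consume from a JMS queue
            .to("http://example.org/api/orders")       // call an HTTP service
            .to("smtp://mailhost?to=ops@example.org"); // send a notification email
    }
}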
The most complicated solution would be the usage of some BPMN engine (e.g. Camunda, Activiti, jBPM), but that would probably be overkill.
I am writing a Java-based Kafka consumer application, utilizing kafka-clients, Spring Kafka, and Spring Boot. While Spring Boot lets me easily write Kafka consumers (without really writing the ConcurrentKafkaListenerContainerFactory, ConsumerFactory, etc.), I want to be able to define/customize some of the properties for these consumers. However, I could not find an easy way to do it using Spring Boot. For example, some of the properties that I would be interested in setting are:
ConsumerConfig.MAX_PARTITION_FETCH_BYTES_CONFIG
ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG
I took a look at the Spring Boot pre-defined properties here.
Also, based on an earlier question here, I want to set up the concurrency on the consumers, but cannot find an application.properties-driven way to do that using Spring Boot.
An obvious way is to define the ConcurrentKafkaListenerContainerFactory and ConsumerFactory beans again in my Spring context and work from there. I wanted to understand if there is a cleaner way of doing that, especially since I am using Spring Boot.
Versions:
kafka-clients - 0.10.0.0-SASL
spring-kafka - 1.1.0.RELEASE
spring boot - 1.5.10.RELEASE
At the URL you cited, scroll down to
spring.kafka.listener.concurrency= # Number of threads to run in the listener containers.
spring-kafka - 1.1.0.RELEASE
I recommend upgrading to at least 1.3.5; it has a much simpler threading model, thanks to KIP-62.
EDIT
With Boot 2.0, you can set arbitrary producer, consumer, admin, and common properties, as described in the Boot documentation:
spring.kafka.consumer.properties.heartbeat.interval.ms
With Boot 1.5 there is only spring.kafka.properties as described here.
This sets the properties for both producers and consumers, but you may see some noise in the log about unused/unsupported properties for the producer.
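For example, in application.properties (the value shown is just for illustration):

spring.kafka.properties.heartbeat.interval.ms=5000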
Alternatively, you can simply override Boot's consumer factory and add properties as needed...
@Bean
public ConsumerFactory<?, ?> kafkaConsumerFactory(KafkaProperties properties) {
    Map<String, Object> consumerProps = properties.buildConsumerProperties();
    consumerProps.put(ConsumerConfig.HEARTBEAT_INTERVAL_MS_CONFIG, 5_000);
    return new DefaultKafkaConsumerFactory<Object, Object>(consumerProps);
}
I googled for it and found https://github.com/spring-projects/spring-kafka/issues/604. The issue had been closed citing https://docs.spring.io/spring-boot/docs/2.0.0.RELEASE/reference/htmlsingle/#boot-features-kafka-extra-props, but that is for Spring Boot version 2.0.
I managed to create a simple WebSocket application with Spring 4 and STOMP. See my last question here.
Then I tried to use a remote message broker (ActiveMQ). I just started the broker and changed
registry.enableSimpleBroker("/topic");
to
registry.enableStompBrokerRelay("/topic");
and it worked.
The question is: how is the broker configured? I understand that in this case the application automagically finds the broker on localhost and the default port, but what if I need to point the app to some other broker on another machine?
The enableStompBrokerRelay method returns a convenient Registration instance that exposes a fluent API.
You can use this fluent API to configure your Broker relay:
registry.enableStompBrokerRelay("/topic").setRelayHost("host").setRelayPort("1234");
You can also configure various properties, like login/pass credentials for your broker, etc.
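A fuller Java config sketch (host, port, and credentials are placeholders):

import org.springframework.context.annotation.Configuration;
import org.springframework.messaging.simp.config.MessageBrokerRegistry;
import org.springframework.web.socket.config.annotation.AbstractWebSocketMessageBrokerConfigurer;
import org.springframework.web.socket.config.annotation.EnableWebSocketMessageBroker;
import org.springframework.web.socket.config.annotation.StompEndpointRegistry;

@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig extends AbstractWebSocketMessageBrokerConfigurer {

    @Override
    public void configureMessageBroker(MessageBrokerRegistry registry) {
        registry.enableStompBrokerRelay("/topic", "/queue")
                .setRelayHost("relayhost")
                .setRelayPort(61613) // e.g. ActiveMQ's default STOMP port
                .setClientLogin("clientlogin")
                .setClientPasscode("clientpass")
                .setSystemLogin("syslogin")
                .setSystemPasscode("syspass");
    }

    @Override
    public void registerStompEndpoints(StompEndpointRegistry registry) {
        registry.addEndpoint("/foo").withSockJS();
    }
}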
The same with XML configuration:
<websocket:message-broker>
    <websocket:stomp-endpoint path="/foo">
        <websocket:handshake-handler ref="myHandler"/>
        <websocket:sockjs/>
    </websocket:stomp-endpoint>
    <websocket:stomp-broker-relay prefix="/topic,/queue"
                                  relay-host="relayhost" relay-port="1234"
                                  client-login="clientlogin" client-passcode="clientpass"
                                  system-login="syslogin" system-passcode="syspass"
                                  heartbeat-send-interval="5000" heartbeat-receive-interval="5000"
                                  virtual-host="example.org"/>
</websocket:message-broker>
See the StompBrokerRelayRegistration javadoc for more details on properties and default values.