I have a requirement of processing the same information of a request in two different ways asynchronously. I am using spring integration in my project.
Can I have two service activators reading from the same input channel as below? I will get my data from a queue through an adapter and forwarded to the channel.
<int:channel id="dataChannel" />
<int:service-activator input-channel="dataChannel" ref="serviceA" method="method1" />
<int:service-activator input-channel="dataChannel" ref="serviceB" method="method2" />
<bean id="serviceA" class="a.b.test.ServiceA" />
<bean id="serviceB" class="a.b.test.ServiceB" />
Change dataChannel to
<int:publish-subscribe-channel id="dataChannel" task-executor="exec" />
(declare a <task:executor ... />).
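Putting it together, a minimal sketch (the bean id `exec` and the pool size are illustrative) could look like this — with a publish-subscribe channel both subscribers receive every message, and the task executor hands each delivery off to its own thread:

```xml
<int:publish-subscribe-channel id="dataChannel" task-executor="exec" />

<!-- pool size is illustrative; size it for your load -->
<task:executor id="exec" pool-size="4" />

<!-- both activators are subscribed; each message goes to both, asynchronously -->
<int:service-activator input-channel="dataChannel" ref="serviceA" method="method1" />
<int:service-activator input-channel="dataChannel" ref="serviceB" method="method2" />
```

Note that with a plain direct channel, two subscribers would instead receive messages in round-robin fashion, which is why the publish-subscribe channel is needed here.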
I want to make an interceptor that will intercept every request except those related to login. The problem is that the interceptor still intercepts the requests I listed in exclude-mapping; the exclude-mapping is not working.
Here is the configuration (Spring 4.3):
<mvc:interceptors>
<beans:bean class="com.knowledge.filter.GlobalInterceptor" />
<mvc:interceptor>
<mvc:mapping path="/back" />
<mvc:exclude-mapping path="/back/login" />
<beans:bean class="com.knowledge.filter.LoginInterceptor" />
</mvc:interceptor>
</mvc:interceptors>
In my opinion, "/back/login" should not be intercepted, but requests to it still reach the interceptor class. Am I making a mistake somewhere?
I recommend writing a separate mapping for every individual path instead of grouping them, unless you have a common implementation for all the services. Excluding works if login and back are separate REST controllers, not one being a child of the other.
You can refer to this example for Spring 4:
http://www.kscodes.com/spring-mvc/spring-mvc-interceptor-example/
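Note also, as a sketch not tested against the original project: `path="/back"` matches only the exact /back URL, so the interceptor was likely registered via the `GlobalInterceptor` mapping rather than this one. A wildcard pattern is normally used to cover sub-paths while excluding login:

```xml
<mvc:interceptors>
    <mvc:interceptor>
        <!-- /back/** matches /back and everything beneath it -->
        <mvc:mapping path="/back/**" />
        <!-- excluded paths are checked after the mapping matches -->
        <mvc:exclude-mapping path="/back/login" />
        <beans:bean class="com.knowledge.filter.LoginInterceptor" />
    </mvc:interceptor>
</mvc:interceptors>
```

Also keep in mind that the `GlobalInterceptor` declared outside any `<mvc:interceptor>` element applies to every request, including /back/login, regardless of this exclude-mapping.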
I have the following XML configuration for a Kafka outbound channel adapter:
<int-kafka:outbound-channel-adapter id="kafkaOutboundChannelAdapter"
kafka-producer-context-ref="kafkaProducerContext"
auto-startup="true"
channel="activityOutputChannel">
<int:poller fixed-delay="1000" time-unit="MILLISECONDS" receive-timeout="0" task-executor="taskExecutor"/>
</int-kafka:outbound-channel-adapter>
<task:executor id="taskExecutor"
pool-size="5-25"
queue-capacity="20"
keep-alive="120"/>
This works just fine. I am trying to replicate it in the Java DSL, but I can't get very far. So far, I just have this:
.handle(Kafka.outboundChannelAdapter(kafkaConfig)
.addProducer(producerMetadata, brokerAddress)
.get());
I can't figure out how to add the taskExecutor and the poller with the DSL.
Any insight on how to incorporate these into my overall IntegrationFlow is appreciated.
The Spring Integration components (e.g. <int-kafka:outbound-channel-adapter>) consist of two beans: an AbstractEndpoint that accepts messages from the input-channel and a MessageHandler that handles each message.
So, Kafka.outboundChannelAdapter() produces the MessageHandler. Any other endpoint-specific properties are configured via the second Consumer<GenericEndpointSpec<H>> endpointConfigurer argument of the .handle() EIP method:
.handle(Kafka.outboundChannelAdapter(kafkaConfig)
.addProducer(producerMetadata, brokerAddress),
e -> e.id("kafkaOutboundChannelAdapter")
.poller(p -> p.fixedDelay(1000, TimeUnit.MILLISECONDS)
.receiveTimeout(0)
.taskExecutor(this.taskExecutor)));
See the Reference Manual for more information.
I've been playing with the ImapIdleChannelAdapter integrated with Spring for a while now, and noticed that it starts 10 task-scheduler threads.
I have checked the documentation for the ImapIdleChannelAdapter, but was not able to find a way to configure how many threads it starts when listening to an email inbox.
Here is my Spring config:
<int:channel id="receiveChannel" >
<int:dispatcher task-executor="threadPool" />
</int:channel>
<int-mail:imap-idle-channel-adapter id="imapAdapter"
store-uri="imaps://#{systemProperties['imaps.encoded.username']}:#{systemProperties['imaps.encoded.password']}##{systemProperties['imaps.host']}:#{systemProperties['imaps.port']}/INBOX"
channel="receiveChannel" auto-startup="true" should-delete-messages="false" should-mark-messages-as-read="false"
java-mail-properties="javaMailProperties">
</int-mail:imap-idle-channel-adapter>
Thanks for the help.
You control this by setting the number of threads in your executor; it's the configuration of your "threadPool" bean, not of the imapAdapter itself. You can further configure the threading of the imapAdapter with setSendingTaskExecutor() and setTaskScheduler().
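Assuming the 10 threads you see belong to Spring Integration's default TaskScheduler (a bean named `taskScheduler` with a pool size of 10), one way to shrink them is to declare your own bean under that name; this is a sketch, and the pool size of 2 is illustrative:

```xml
<!-- overrides Spring Integration's default 10-thread scheduler;
     pool size is illustrative, size it for your flows -->
<task:scheduler id="taskScheduler" pool-size="2" />
```

Be careful not to make this pool too small: the same scheduler is shared by pollers and other framework components, so starving it can stall unrelated parts of the application.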
I have a Spring app that listens on RabbitMQ and processes messages. When the app starts, it registers itself in a hub database to record that an instance is running. If, during registration, another instance is found to be running already, the app must quit without initializing the Rabbit connection/queue and listener, otherwise some messages could be wrongly consumed. How can I do that? I know there are callbacks when a bean is initialized, so I could create a checking bean that verifies whether another instance is running, but how do I make sure this bean is initialized before the other beans? I also need the data source bean initialized before the checking bean, or it can't use the database defined in the configuration.
<rabbit:connection-factory id="connectionFactory" host="localhost" username="guest" password="guest" />
<rabbit:admin id="containerAdmin" connection-factory="connectionFactory" />
<rabbit:queue id="ETLQueue" name="ETLQueue" />
<rabbit:direct-exchange id="myExchange" name="ETL">
<rabbit:bindings>
<!-- if no key is specified here, it defaults to the queue name; messages must then
be sent with the correct key, otherwise the listener can't receive them -->
<rabbit:binding queue="ETLQueue" key="ETLQueue" />
</rabbit:bindings>
</rabbit:direct-exchange>
<bean id="aListener" class="com.testcom.amqp.listener.ConnectorListener" c:dataSource-ref="dataSource"/>
<!-- concurrency sets how many threads consume messages concurrently; the listener code must be thread safe -->
<rabbit:listener-container id="myListenerContainer" connection-factory="connectionFactory" prefetch="1" concurrency="10">
<rabbit:listener ref="aListener" queues="ETLQueue" />
</rabbit:listener-container>
Since you are using XML configuration, the depends-on attribute ensures that a bean is not initialized before the target bean:
<bean id="A" .../>
<bean id="B" ... depends-on="A"/>
Bean B will be initialized after A (unless there is a cyclic dependency, in which case Spring only does its best).
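Applied to your case, a sketch could chain depends-on from the data source through a checking bean to the messaging beans (`InstanceGuard` and its `check` method are hypothetical names; the guard's init-method would query the hub database and throw an exception to abort startup if another instance is registered):

```xml
<!-- hypothetical guard bean: its init-method checks the hub database and
     throws an exception, aborting context startup, if another instance runs -->
<bean id="instanceGuard" class="com.testcom.InstanceGuard"
      depends-on="dataSource" init-method="check">
    <property name="dataSource" ref="dataSource" />
</bean>

<!-- the connection factory, and everything that depends on it,
     is only created after the guard has passed -->
<rabbit:connection-factory id="connectionFactory" host="localhost"
      username="guest" password="guest" depends-on="instanceGuard" />
```

Because the listener container and admin both depend on the connection factory, gating the factory alone is enough to keep any AMQP consumer from starting.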
I have a heavily loaded (a lot of external network calls) integration flow, which uses a PriorityQueue before entering the main service activator. I want to add an executor channel to improve the system's throughput, but I see no straightforward way to combine those channel types.
<int:channel id="monitorInPriorityUpdate">
<int:priority-queue/>
</int:channel>
<int:transformer id="monitorLogTransformerStub"
input-channel="monitorInPriorityUpdate" output-channel="monitorInUpdate"
expression="payload" />
<int:channel id="monitorInUpdate">
<int:dispatcher task-executor="monitorExecutor"/>
</int:channel>
I needed to create two additional components to make this work; is there a way to combine several Spring Integration channels into one without adding new components?
Actually, there is not enough info here, but let me guess. You need this:
<int:channel id="priorityChannel">
<int:priority-queue/>
</int:channel>
<int:bridge input-channel="priorityChannel" output-channel="executorChannel">
<int:poller fixed-rate="100"/>
</int:bridge>
<int:channel id="executorChannel">
<int:dispatcher task-executor="threadPoolExecutor"/>
</int:channel>
Here you use a Bridge to shift messages from one channel to another one.
OR this:
<int:channel id="priorityChannel">
<int:priority-queue/>
</int:channel>
<int:service-activator input-channel="priorityChannel" ref="service">
<int:poller fixed-rate="100" task-executor="threadPoolExecutor"/>
</int:service-activator>
Here you move messages from priorityChannel to the taskExecutor using a Poller.
It is abnormal to mix concerns in one channel; each channel type plays its own concrete role.
What you want to achieve would merely minimize typing, and even if such a solution existed, it would be very complex and not robust.