This is my current Spring AMQP configuration:
<rabbit:connection-factory id="rabbitConnectionFactory"
port="${rabbitmq.port}" host="${rabbitmq.host}" username="${rabbitmq.username}" password="${rabbitmq.password}"/>
<rabbit:admin id="rabbitmqAdmin" connection-factory="rabbitConnectionFactory" />
<rabbit:template id="importAmqpTemplate"
connection-factory="rabbitConnectionFactory">
</rabbit:template>
and this is my configuration for exchanges, queues, listeners, reply queues, and reply handlers:
<rabbit:queue name="${process1.queue}" />
<rabbit:queue name="${process1.reply.queue}" />
<rabbit:queue name="${process2.queue}" />
<rabbit:queue name="${process2.reply.queue}" />
<rabbit:direct-exchange name="${myExchange}">
<rabbit:bindings>
<rabbit:binding queue="${process1.queue}"
key="${process1.routing.key}" />
<rabbit:binding queue="${process2.queue}"
key="${process2.routing.key}" />
</rabbit:bindings>
</rabbit:direct-exchange>
<rabbit:listener-container
connection-factory="rabbitConnectionFactory" concurrency="${my.listener.concurrency}"
requeue-rejected="false">
<rabbit:listener queues="${process1.queue}"
ref="foundation" method="process1" />
<rabbit:listener queues="${process2.queue}"
ref="foundation" method="process2s" />
</rabbit:listener-container>
<beans:beans profile="master">
<beans:bean id="process1Lbq" class="java.util.concurrent.LinkedBlockingQueue" />
<beans:bean id="process2Lbq" class="java.util.concurrent.LinkedBlockingQueue" />
<beans:bean id="process1sReplyHandler"
class="com.stockopedia.batch.foundation.ReplyHandler"
p:blockingQueue-ref="process1Lbq" />
<beans:bean id="process2ReplyHandler"
class="com.stockopedia.batch.foundation.ReplyHandler"
p:blockingQueue-ref="process2Lbq" />
<rabbit:listener-container
connection-factory="rabbitConnectionFactory" concurrency="1"
requeue-rejected="false">
<rabbit:listener queues="${process1.reply.queue}"
ref="process1sReplyHandler" method="onMessage" />
<rabbit:listener queues="${process2.reply.queue}"
ref="process2ReplyHandler" method="onMessage" />
</rabbit:listener-container>
</beans:beans>
I have set this up on 6 different servers, and I am queuing up messages from the master servers only. The other servers only process messages. All servers have the same number of listeners running, as set by concurrency.
The problem is that messages take different amounts of time to process; some take a long time. So currently some of the servers do not pick up messages from the queues, even though all listeners on those servers are done processing their messages.
I can see pending messages in the queue waiting to be processed while some servers just sit idle. I want those servers to pick up the remaining messages while the other servers are busy processing theirs.
Do I need to set basicQos as mentioned in the tutorial http://www.rabbitmq.com/tutorials/tutorial-two-java.html (Fair Dispatch)?
int prefetchCount = 1;
channel.basicQos(prefetchCount);
or is it the default for Spring AMQP? If not, how do I do it?
basicQos(1) is the default setting for the listener container; it can be changed by setting prefetch on the container.
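For example, if a different value were ever needed, it could be set like this (a sketch based on the container from the question; only the prefetch attribute is new):

```xml
<rabbit:listener-container
    connection-factory="rabbitConnectionFactory"
    concurrency="${my.listener.concurrency}"
    prefetch="1"
    requeue-rejected="false">
    <rabbit:listener queues="${process1.queue}" ref="foundation" method="process1" />
</rabbit:listener-container>
```

With prefetch="1", each consumer takes only one unacked message at a time, which gives the fair-dispatch behavior described in the RabbitMQ tutorial.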
I can see the pending messages in queue to be processed and some servers just sitting idle.
You shouldn't see messages just sitting in the queue if you have idle consumers. If messages are marked as un-acked, they are being processed.
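One way to check this is to compare the ready vs. unacknowledged counts on the broker itself, with a standard rabbitmqctl invocation (run on a broker node):

```shell
rabbitmqctl list_queues name messages_ready messages_unacknowledged
```

If messages_ready stays above zero while consumers are idle, something else is wrong; if the backlog is all in messages_unacknowledged, those messages are in flight.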
If you turn on DEBUG level logging, you will be able to see idle consumers polling an internal queue for new deliveries.
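With Logback, for example, that output can be enabled with a logger entry like this (a logback.xml fragment; the category targets the listener container internals):

```xml
<!-- logback.xml fragment: show the container's idle consumers polling at DEBUG -->
<logger name="org.springframework.amqp.rabbit.listener" level="DEBUG"/>
```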
I have a requirement where I need to pass a message to multiple channels asynchronously. To make my flow async I am using executor channels everywhere, but for some reason the flow is still sequential. I can see different threads, as configured in the task executors, but they run in sequence.
Here is the configuration I am using:
<int:channel id="mainChannel">
<int:interceptors>
<int:wire-tap channel="channel1"/>
<int:wire-tap channel="channel2"/>
<int:wire-tap channel="channel3"/>
</int:interceptors>
</int:channel>
<int:channel id="channel1">
<int:dispatcher task-executor="exec1" />
</int:channel>
<int:channel id="channel2">
<int:dispatcher task-executor="exec2" />
</int:channel>
<int:channel id="channel3">
<int:dispatcher task-executor="exec3" />
</int:channel>
As per my understanding, all of this should be async (in my case, 3 threads should run in parallel).
From the log I can see everything is sequential, but with different thread names.
I am assuming preSend/postSend should have been called in random order.
Am I missing anything needed to make multiple executor channels run in parallel?
I would really appreciate any help.
You might need to use an async task executor implementation, as shown:
<beans:bean id="asyncExecutor"
class="org.springframework.core.task.SimpleAsyncTaskExecutor"/>
<int:channel id="channel1">
<int:dispatcher task-executor="asyncExecutor" />
</int:channel>
<int:channel id="channel2">
<int:dispatcher task-executor="asyncExecutor" />
</int:channel>
<int:channel id="channel3">
<int:dispatcher task-executor="asyncExecutor" />
</int:channel>
Description of SimpleAsyncTaskExecutor:
public class SimpleAsyncTaskExecutor extends CustomizableThreadCreator
implements AsyncListenableTaskExecutor, Serializable
TaskExecutor implementation that fires up a new Thread for each task,
executing it asynchronously.
Supports limiting concurrent threads through the "concurrencyLimit"
bean property. By default, the number of concurrent threads is
unlimited.
NOTE: This implementation does not reuse threads! Consider a
thread-pooling TaskExecutor implementation instead, in particular for
executing a large number of short-lived tasks.
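Following that note, a thread-pooling alternative could look like this (a sketch; the pool sizes are arbitrary and the bean id is an assumption):

```xml
<!-- Hypothetical pooled replacement for SimpleAsyncTaskExecutor; threads are reused -->
<beans:bean id="pooledExecutor"
    class="org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor">
    <beans:property name="corePoolSize" value="3"/>
    <beans:property name="maxPoolSize" value="10"/>
    <beans:property name="queueCapacity" value="25"/>
</beans:bean>
```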
Example of usage from GitHub:
<?xml version="1.0" encoding="UTF-8"?>
<beans:beans xmlns="http://www.springframework.org/schema/integration"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:beans="http://www.springframework.org/schema/beans"
xsi:schemaLocation="http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans.xsd
http://www.springframework.org/schema/integration
http://www.springframework.org/schema/integration/spring-integration.xsd">
<channel id="taskExecutorOnly">
<dispatcher task-executor="taskExecutor"/>
</channel>
<channel id="failoverFalse">
<dispatcher failover="false"/>
</channel>
<channel id="failoverTrue">
<dispatcher failover="true"/>
</channel>
<channel id="loadBalancerDisabled">
<dispatcher load-balancer="none"/>
</channel>
<channel id="loadBalancerDisabledAndTaskExecutor">
<dispatcher load-balancer="none" task-executor="taskExecutor"/>
</channel>
<channel id="roundRobinLoadBalancerAndTaskExecutor">
<dispatcher load-balancer="round-robin" task-executor="taskExecutor"/>
</channel>
<channel id="lbRefChannel">
<dispatcher load-balancer-ref="lb"/>
</channel>
<beans:bean id="taskExecutor"
class="org.springframework.core.task.SimpleAsyncTaskExecutor"/>
<beans:bean id="lb"
class="org.springframework.integration.channel.config.DispatchingChannelParserTests.SampleLoadBalancingStrategy"/>
</beans:beans>
from log i can see all sequential but with diff thread name
Because the log is just a single place where messages are printed, and they really are printed by one writer, even when they come from different threads; they appear there one by one. With a good load you would definitely see messages logged in an unexpected order.
I am assuming preSend/Postsend should have been called in random order.
That's not true. Interceptors are called in the order in which they are added to the channel, and since that order is the same on every send, so is the call order, which is the case for you. How those interceptors are implemented is not the interceptor chain's responsibility.
I think you just weren't lucky enough to see the logs in an arbitrary order, probably because the consumers for those executor channels are plain loggers: there is no load to hold a thread long enough to give the impression that the work in the other threads runs in parallel.
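One way to make the parallelism visible is to give each subscriber some real work before it finishes, e.g. an artificial delay (a sketch; the SpEL sleep trick and the channel name are illustrative only):

```xml
<!-- Hypothetical slow subscriber: Thread.sleep is void, which SpEL evaluates to
     null, so the Elvis operator passes the original payload through after 500 ms -->
<int:service-activator input-channel="channel1" output-channel="nullChannel"
    expression="T(java.lang.Thread).sleep(500) ?: payload"/>
```

With three such subscribers, a send completes in roughly 500 ms instead of 1500 ms, showing the executor channels really do hand work off to parallel threads.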
I am trying to build a Spring Integration application, which has the following configuration (the culprit seems to be the channel xsltSpecific):
<beans:beans>
<channel id="channel1"></channel>
<channel id="channel2"></channel>
<channel id="xsltSpecific"></channel>
<channel id="xsltSpecificDelayed"></channel>
<channel id="xsltCommon"></channel>
<channel id="irdSpecificUnmarshallerChannel"></channel>
<channel id="irdSpecificInputChannel"></channel>
<file:outbound-channel-adapter
directory="${dml.ird.directory}" channel="channel1"
auto-create-directory="true" filename-generator="timestampedFileNameGenerator">
</file:outbound-channel-adapter>
<recipient-list-router input-channel="fileChannel">
<recipient channel="channel1" selector-expression="${dml.data.logs.enable}" />
<recipient channel="channel2" />
</recipient-list-router>
<recipient-list-router input-channel="channel2">
<recipient channel="xsltSpecificDelayed"></recipient>
<recipient channel="xsltCommon"></recipient>
</recipient-list-router>
<delayer id="specificDelayer" input-channel="xsltSpecificDelayed" default-delay="5000" output-channel="xsltSpecific"/>
<jms:message-driven-channel-adapter
id="jmsInboundAdapterIrd" destination="jmsInputQueue" channel="fileChannel"
acknowledge="transacted" transaction-manager="transactionManager"
error-channel="errorChannel" client-id="${ibm.jms.connection.factory.client.id}"
subscription-durable="true" durable-subscription-name="${ibm.jms.subscription.id1}" />
<si-xml:xslt-transformer input-channel="xsltCommon" output-channel="jmsInputChannel"
xsl-resource="classpath:summit-hub-to-cpm-mapping.xsl" result-transformer="resultTransformer" >
</si-xml:xslt-transformer>
<si-xml:xslt-transformer input-channel="xsltSpecific" output-channel="irdSpecificUnmarshallerChannel"
xsl-resource="classpath:summit-hub-specific.xsl" result-transformer="resultTransformer" >
</si-xml:xslt-transformer>
<si-xml:unmarshalling-transformer id="irdUnmarshaller"
unmarshaller="irdUnmarshallerDelegate" input-channel="irdSpecificUnmarshallerChannel"
output-channel="saveSpecificTradeChannel" />
<beans:bean id="irdUnmarshallerDelegate"
class="org.springframework.oxm.jaxb.Jaxb2Marshaller">
<beans:property name="schema"
value="summit-hub-specific.xsd" />
<beans:property name="contextPath"
value="com.h.i.c.d.i.mapping" />
</beans:bean>
<beans:bean id="resultTransformer" class="org.springframework.integration.xml.transformer.ResultToStringTransformer" />
<service-activator ref="specificTradeService" input-channel="saveSpecificTradeChannel"
requires-reply="false" method="save"/>
<file:inbound-channel-adapter directory="${dml.retry.directoryForIrd}"
channel="fileChannelAfterRetry" auto-create-directory="true"
prevent-duplicates="false" filename-regex=".*\.(msg|xml)" queue-size="50" >
<poller fixed-delay="${dml.retry.delay}" max-messages-per-poll="50">
<transactional transaction-manager="transactionManager" />
</poller>
</file:inbound-channel-adapter>
<channel id="fileChannel"/>
<channel id="fileChannelAfterRetry"/>
<file:file-to-string-transformer
input-channel="fileChannelAfterRetry" output-channel="fileChannel"
delete-files="true" />
<beans:import resource="classpath:cpm-dml-common-main.xml" />
</beans:beans>
But I am getting the following exception:
org.springframework.messaging.MessageDeliveryException: Dispatcher has no subscribers for channel 'org.springframework.context.support.GenericApplicationContext#6950e31.xsltSpecific'.; nested exception is org.springframework.integration.MessageDispatchingException: Dispatcher has no subscribers
What does this exception mean?
Also, I am not able to spot the problem; can you help me fix this issue?
UPDATE
Sorry, I didn't give the whole context earlier, because I didn't think it was relevant.
The exception arises during a test derived from AbstractTransactionalJUnit4SpringContextTests, which closed the application context at the end of the test, before the message had a chance to get to the end.
I've added a Thread.sleep(10000) at the end of the test, and the exception doesn't happen anymore.
The xsltSpecific is just a default DirectChannel with a UnicastingDispatcher to deliver messages to channel's subscribers.
According to your configuration, you send a message to this channel from the:
<delayer id="specificDelayer" input-channel="xsltSpecificDelayed" default-delay="5000" output-channel="xsltSpecific"/>
And it also looks like you really do have a subscriber to this channel:
<si-xml:xslt-transformer input-channel="xsltSpecific" output-channel="irdSpecificUnmarshallerChannel"
xsl-resource="classpath:summit-hub-specific.xsl" result-transformer="resultTransformer" >
</si-xml:xslt-transformer>
What is really not clear is when this defined subscriber is lost. It doesn't look like you have auto-startup="false" on this endpoint, but on the other hand maybe you really do stop it at runtime...
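For comparison, the only declarative way to get that endpoint into a non-subscribed state at startup would be something like this (a hypothetical variant, not your actual config):

```xml
<!-- Hypothetical: with auto-startup="false" the transformer never subscribes to
     xsltSpecific until started manually, producing exactly this exception -->
<si-xml:xslt-transformer input-channel="xsltSpecific"
    output-channel="irdSpecificUnmarshallerChannel"
    xsl-resource="classpath:summit-hub-specific.xsl"
    result-transformer="resultTransformer"
    auto-startup="false"/>
```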
Would you mind sharing more of the stack trace on the matter? I want to see who the original caller for that lost message is.
A Spring Integration TCP gateway can be set up as follows:
<!-- Server side -->
<int-ip:tcp-connection-factory id="crLfServer"
type="server"
port="${availableServerSocket}"/>
<int-ip:tcp-inbound-gateway id="gatewayCrLf"
connection-factory="crLfServer"
request-channel="serverBytes2StringChannel"
error-channel="errorChannel"
reply-timeout="10000" />
<int:channel id="toSA" />
<int:service-activator input-channel="toSA"
ref="echoService"
method="test"/>
<bean id="echoService"
class="org.springframework.integration.samples.tcpclientserver.EchoService" />
<int:object-to-string-transformer id="serverBytes2String"
input-channel="serverBytes2StringChannel"
output-channel="toSA"/>
<int:transformer id="errorHandler"
input-channel="errorChannel"
expression="Error processing payload"/>
Notice the reply-timeout which is set as 10 seconds.
Does it mean that the TCP server will call the service and wait for a maximum of 10 seconds? If the service does not reply within 10 seconds, will the TCP server send the message to errorChannel, which in turn sends the client the error message "Error processing payload"?
When I tested the TCP server with a service that takes 20 seconds, the client takes 20 seconds to get the response. I am not seeing the error message.
Can you please help me understand the reply-timeout in the TCP inbound gateway?
Thanks
UPDATE:
Thanks to Artem for helping out with this issue.
The best way to solve this problem is with the following config:
<beans>
<int-ip:tcp-connection-factory id="crLfServer" type="server" port="${availableServerSocket}"/>
<int-ip:tcp-inbound-gateway id="gatewayCrLf" connection-factory="crLfServer" request-channel="requestChannel" error-channel="errorChannel" reply-timeout="5000" />
<int:service-activator input-channel="requestChannel" ref="gateway" requires-reply="true"/>
<int:gateway id="gateway" default-request-channel="timeoutChannel" default-reply-timeout="5000" />
<int:object-to-string-transformer id="serverBytes2String" input-channel="timeoutChannel" output-channel="serviceChannel"/>
<int:channel id="timeoutChannel">
<int:dispatcher task-executor="executor"/>
</int:channel>
<bean id="executor" class="org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor">
<property name="corePoolSize" value="5" />
<property name="maxPoolSize" value="10" />
<property name="queueCapacity" value="25" />
</bean>
<int:service-activator input-channel="serviceChannel" ref="echoService" method="test"/>
<bean id="echoService" class="org.springframework.integration.samples.tcpclientserver.EchoService" />
<int:transformer id="errorHandler" input-channel="errorChannel" expression="payload.failedMessage.payload + ' errorHandleMsg: may be timeout error'"/>
</beans>
Thanks
Well, actually we should add to that attribute a description like the one we have in other similar places, e.g. the HTTP Inbound Gateway:
<xsd:attribute name="reply-timeout" type="xsd:string">
<xsd:annotation>
<xsd:documentation><![CDATA[
Used to set the receiveTimeout on the underlying MessagingTemplate instance
(org.springframework.integration.core.MessagingTemplate) for receiving messages
from the reply channel. If not specified this property will default to "1000"
(1 second).
]]></xsd:documentation>
</xsd:annotation>
</xsd:attribute>
That timeout means how long to wait for a reply from the downstream flow. But! That is only possible if your flow is shifted to another thread somewhere. Otherwise everything is performed in the caller's thread, and therefore the wait time isn't deterministic.
Anyway, we return null from there after the timeout without a reply, and that is reflected in the TcpInboundGateway:
Message<?> reply = this.sendAndReceiveMessage(message);
if (reply == null) {
if (logger.isDebugEnabled()) {
logger.debug("null reply received for " + message + " nothing to send");
}
return false;
}
We could reconsider the logic in the TcpInboundGateway to something like:
if (reply == null && this.errorOnTimeout) {
if (object instanceof Message) {
error = new MessageTimeoutException((Message<?>) object, "No reply received within timeout");
}
else {
error = new MessageTimeoutException("No reply received within timeout");
}
}
But it seems to me that it really would be better to rely on the timeout from the client.
UPDATE
I think we can overcome the limitation and meet your requirements with a midflow <gateway>:
<gateway id="gateway" default-request-channel="timeoutChannel" default-reply-timeout="10000"/>
<channel id="timeoutChannel">
<dispatcher task-executor="executor"/>
</channel>
<service-activator input-channel="requestChannel"
ref="gateway"
requires-reply="true"/>
So, the <service-activator> calls the <gateway> and waits for a reply from there. With requires-reply="true", a timed-out reply, of course, ends up as a ReplyRequiredException, which you can convert into the desired MessageTimeoutException in your error flow on the error-channel="errorChannel".
The timeoutChannel is an executor channel, making our default-reply-timeout="10000" very useful, because we shift the message on the gateway into a separate thread immediately and move right from there into the reply-waiting process, wrapped with that timeout on the CountDownLatch.
Hope that is clear.
I'm using ActiveMQ's compositeTopic to fan-out messages to multiple destinations like this:
<broker>
<persistenceAdapter>
<kahaDB directory="${activemq.data}/kahadb"/>
</persistenceAdapter>
...
<destinationInterceptors>
<virtualDestinationInterceptor>
<virtualDestinations>
<compositeTopic name="fan-out" forwardOnly="true">
<forwardTo>
<queue physicalName="persistent"/>
<queue physicalName="ephemeral"/>
</forwardTo>
</compositeTopic>
</virtualDestinations>
</virtualDestinationInterceptor>
</destinationInterceptors>
</broker>
So, I want to forward messages to both the persistent and ephemeral queues at the same time. As you might guess from their names, I want messages in the persistent queue to be persistent, and I do not need persistence for the ephemeral queue. The problem is that ActiveMQ doesn't have a concept of persistence on a per-destination basis, does it? One can set persistence for a whole broker, or use persistent / non-persistent delivery modes. So, the question is: how can I disable persistence for the ephemeral queue in this case?
So, the solution that seems to work is to use Apache Camel with ActiveMQ. Just add a route that drains the ephemeral queue to another queue, setting the TTL / persistence mode in the process:
<broker>
<persistenceAdapter>
<kahaDB directory="${activemq.data}/kahadb"/>
</persistenceAdapter>
...
<destinationInterceptors>
<virtualDestinationInterceptor>
<virtualDestinations>
<compositeTopic name="fan-out" forwardOnly="true">
<forwardTo>
<queue physicalName="persistent"/>
<queue physicalName="ephemeral"/>
</forwardTo>
</compositeTopic>
</virtualDestinations>
</virtualDestinationInterceptor>
</destinationInterceptors>
</broker>
<camelContext xmlns="http://camel.apache.org/schema/spring" id="camel">
<route>
<from uri="activemq:queue:ephemeral"/>
<to uri="activemq:queue:ephemeral-backend?timeToLive=10000"/>
</route>
</camelContext>
timeToLive is the message's TTL in milliseconds. In the config above messages are still persistent: after the TTL expires they are moved to the DLQ. If you want to throw them away instead, the config should also set deliveryPersistent to false:
<camelContext xmlns="http://camel.apache.org/schema/spring" id="camel">
<route>
<from uri="activemq:queue:ephemeral" />
<to uri="activemq:queue:ephemeral-backend?timeToLive=10000&amp;deliveryPersistent=false" />
</route>
</camelContext>
I have a configured spring integration pipeline where xml files are parsed into various objects. The objects are going through several channel endpoints where they are slightly modified - nothing special, just some properties added.
The last endpoint in the pipeline is the persister, where the objects are persisted in the DB. There might be duplicates, so in this endpoint there is also a check whether the object is already persisted or it's a new one.
I use a message driven architecture, with simple direct channels.
<int:channel id="parsedObjects1" />
<int:channel id="parsedObjects2" />
<int:channel id="processedObjects" />
<int:service-activator input-channel="parsedObjects1" ref="processor1" method="process" />
<int:service-activator input-channel="parsedObjects2" ref="processor2" method="process" />
<int:service-activator input-channel="processedObjects" ref="persister" method="persist" />
At the moment there is only one data source, from which I get xml files, and everything is going smoothly. The problems begin when I need to attach a second data source. The files arrive at the same time, so I want them processed in parallel. So I've placed two parser instances, and every parser sends messages through the pipeline.
The configuration with the direct channels that I have creates concurrency problems, so I've tried modifying it. I've tried several configurations from the Spring Integration documentation, but so far with no success.
I've tried a dispatcher configured with a max pool size of 1 - one thread per message in every channel endpoint:
<task:executor id="channelTaskExecutor" pool-size="1-1" keep-alive="10" rejection-policy="CALLER_RUNS" queue-capacity="1" />
<int:channel id="parsedObjects1" >
<int:dispatcher task-executor="channelTaskExecutor" />
</int:channel>
<int:channel id="parsedObjects2" >
<int:dispatcher task-executor="channelTaskExecutor" />
</int:channel>
<int:channel id="processedObjects" >
<int:dispatcher task-executor="channelTaskExecutor" />
</int:channel>
I have also tried the queue-poller configuration:
<task:executor id="channelTaskExecutor" pool-size="1-1" keep-alive="10" rejection-policy="CALLER_RUNS" queue-capacity="1" />
<int:channel id="parsedObjects1" >
<int:rendezvous-queue/>
</int:channel>
<int:channel id="parsedObjects2" >
<int:rendezvous-queue/>
</int:channel>
<int:channel id="processedObjects" >
<int:rendezvous-queue/>
</int:channel>
<int:service-activator input-channel="parsedObjects1" ref="processor1" method="process" >
<int:poller task-executor="channelTaskExecutor" max-messages-per-poll="1" fixed-rate="2" />
</int:service-activator>
<int:service-activator input-channel="parsedObjects2" ref="processor2" method="process" >
<int:poller task-executor="channelTaskExecutor" max-messages-per-poll="1" fixed-rate="2" />
</int:service-activator>
<int:service-activator input-channel="processedObjects" ref="persister" method="persist" >
<int:poller task-executor="channelTaskExecutor" max-messages-per-poll="1" fixed-rate="2" />
</int:service-activator>
Basically, I want to get rid of any race conditions in the channel endpoints - in my case in the persister. The persister channel endpoint should block for every message, because if it runs in parallel, I get many duplicates persisted in the DB.
EDIT:
After some debugging, it seems that the problems are in the endpoint logic rather than the configuration. Some of the objects sent through the pipeline to the persister are also stored in a local cache until parsing of the file is done - they are later sent through the pipeline again, to persist some join tables as part of other domain entities. It happens that with the above configurations, some of the objects were not yet persisted when they were sent through the pipeline the second time, so at the end I get duplicates in the DB.
I'm fairly new to Spring Integration, so at this point I will ask some more general questions. In a setup with multiple data sources - meaning multiple instances of parsers etc.:
Is there a common (best) way to configure the pipeline to enable parallelization?
If there is a need, is there a way to serialize the message handling?
Any suggestions are welcome. Thanks in advance.
First, can you describe what the "concurrency problems" are? Ideally you would not need to serialize the message handling, so that would be a good place to start.
Second, the thread pool as you've configured it will not completely serialize. You will have 1 thread available in the pool but the rejection policy you've chosen leads to a caller thread running the task itself (basically throttling) in the case that the queue is at capacity. That means you will get a caller-run thread concurrently with the one from the pool.
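If complete serialization were really required, an executor that cannot fall back to the caller's thread could be sketched like this (sizes are arbitrary; the default rejection policy, ABORT, never runs tasks on the submitting thread):

```xml
<!-- Single worker thread; a large queue plus the default ABORT policy means a
     submitted task either waits in the queue or fails fast, never runs inline -->
<task:executor id="serialExecutor" pool-size="1" queue-capacity="1000"/>
```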
The best way that I can think of for your scenario would be along these lines:
Make your parsedObjects1 and parsedObjects2 channels normal queue channels; the capacity of the queue can be set appropriately (say, 25 at any time):
<int:channel id="parsedObjects1" >
<int:queue capacity="25" />
</int:channel>
Now at this point your xml processors against the 2 channels - parsedObjects1 and parsedObjects2 - will process the xmls and should output to the processedObjects channel. You can use a configuration similar to what you have for this, except that I have explicitly specified the processedObjects channel:
<int:service-activator input-channel="parsedObjects1" ref="processor1" method="process" output-channel="processedObjects">
<int:poller task-executor="channelTaskExecutor" fixed-delay="50"/>
</int:service-activator>
The third step is where I will deviate from your configuration. At this point you said you want to serialize the persistence; the best way would be to do it through a DIFFERENT task executor with a pool size of 1. This way only 1 instance of your persister is running at any point in time:
<task:executor id="persisterpool" pool-size="1"/>
<int:service-activator input-channel="processedObjects" ref="persister" method="persist" >
<int:poller task-executor="persisterpool" fixed-delay="2"/>
</int:service-activator>
I managed to get the pipeline working. I'm not sure if I'll keep the current configuration, or experiment some more, but for now, this is the configuration I ended up with:
<task:executor id="channelTaskExecutor" pool-size="1-1" keep-alive="10" rejection-policy="CALLER_RUNS" queue-capacity="1" />
<int:channel id="parsedObjects1" >
<int:queue capacity="1000" />
</int:channel>
<int:channel id="parsedObjects2" >
<int:queue capacity="1000" />
</int:channel>
<int:channel id="processedObjects" >
<int:queue capacity="1000" />
</int:channel>
<int:service-activator input-channel="parsedObjects1" ref="processor1" method="process" >
<int:poller task-executor="channelTaskExecutor" max-messages-per-poll="100" fixed-rate="2" />
</int:service-activator>
<int:service-activator input-channel="parsedObjects2" ref="processor2" method="process" >
<int:poller task-executor="channelTaskExecutor" max-messages-per-poll="100" fixed-rate="2" />
</int:service-activator>
<int:service-activator input-channel="processedObjects" ref="persister" method="persist" >
<int:poller task-executor="channelTaskExecutor" max-messages-per-poll="1" fixed-rate="2" />
</int:service-activator>