I want to collect some messages (let's say 10) and pass them as a list to the service activator instead of passing them one by one.
The context:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns=...>
<int:channel id="ch.http.in"/>
<int:channel id="ch.http.trans"/>
<int:channel id="ch.http.aggr"/>
<int-http:inbound-channel-adapter path="test" channel="ch.http.in"/>
<int:map-to-object-transformer input-channel="ch.http.in" output-channel="ch.http.trans" type="demo.Req"/>
<int:aggregator
input-channel="ch.http.trans"
output-channel="ch.http.aggr"
release-strategy-expression="size() == 10"
correlation-strategy-expression="headers['id']"
ref="aggr" method="add"/>
<int:service-activator ref="srv" method="httpTest" input-channel="ch.http.aggr"/>
<bean id="srv" class="demo.IntService"/>
<bean id="aggr" class="demo.HttpAggregator"/>
</beans>
The aggregator:
public class HttpAggregator {
    public List<Req> add(List<Req> reqs) {
        System.out.println(reqs);
        return reqs;
    }
}
The service:
public class IntService {
    public void httpTest(Req msg) {
        System.out.println(msg);
    }
}
Req is just a POJO.
The problem is that the aggregator method is never called. Without the aggregator, the messages are passed to the service activator with no problem.
Using Spring Integration 3.0.2.RELEASE (Spring Boot 1.0.2.RELEASE)
Edit:
When I changed correlation-strategy-expression="headers['id']" to correlation-strategy-expression="payload.id" (the Req object has an id property), it works when I pass a different id for every chunk (e.g. id=1 for the first 10, 2 for the next 10, ...). It looks like that's how the correlation strategy works. How can I bypass it? I just want to limit the size of the aggregated list.
Right; you have to correlate on something; using headers['id'] will end up with lots of groups of one message each, which will never meet the release strategy.
For a simple use case like yours, correlate on a literal - e.g. correlation-strategy-expression="'foo'" - and set expire-groups-upon-completion="true". This resets the group after the release, so a new one (with the same correlation id) can start on the next message.
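A minimal sketch of the adjusted aggregator, reusing the channels and beans from the question (the literal 'foo' is arbitrary; any constant will do):

<int:aggregator
    input-channel="ch.http.trans"
    output-channel="ch.http.aggr"
    correlation-strategy-expression="'foo'"
    release-strategy-expression="size() == 10"
    expire-groups-upon-completion="true"
    ref="aggr" method="add"/>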
If you want to release a partial group after some timeout, you will need a MessageGroupStoreReaper. Or, if you can upgrade to 4.0.x, the aggregator now has a group-timeout (or group-timeout-expression).
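On 3.0.x the reaper route would look roughly like this (a sketch only: it assumes the task namespace is declared, the bean names and timings are illustrative, and the aggregator additionally needs message-store="messageStore" and send-partial-result-on-expiry="true"):

<bean id="messageStore" class="org.springframework.integration.store.SimpleMessageStore"/>

<bean id="reaper" class="org.springframework.integration.store.MessageGroupStoreReaper">
    <!-- expire groups older than 30 seconds -->
    <property name="messageGroupStore" ref="messageStore"/>
    <property name="timeout" value="30000"/>
</bean>

<task:scheduled-tasks>
    <!-- run the reaper every 10 seconds -->
    <task:scheduled ref="reaper" method="run" fixed-rate="10000"/>
</task:scheduled-tasks>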
Related
I have an outbound-channel-adapter, where the relevant configuration is shown below.
<int:outbound-channel-adapter channel="foo-fileChannel" ref="foo-handlerTarget" method="handleFeedFile">
<int:poller fixed-delay="5000" receive-timeout="1000" max-messages-per-poll="10" />
</int:outbound-channel-adapter>
<int:channel id="foo-fileChannel">
<int:queue />
</int:channel>
<bean id="foo-handlerTarget" class="com.abc.FooFeedHandlerImpl">
<property name="fooDescriptorFile" value="${feed.foo.fooDescriptorFile}" />
<property name="fileIdRegex" ref="foo-fileRegex" />
<property name="processId" value="${feed.processId}" />
<property name="workingLocation" value="${feed.foo.workingLocation}" />
<property name="remoteLocation" value="${feed.foo.remoteLocation}" />
<property name="stalenessThreshold" value="${feed.foo.stalenessThreshold}" />
</bean>
And in FooFeedHandlerImpl...
public void handleFeedFile(File retrievedFile) {
    handleFeedFile(retrievedFile, null);
}

public void handleFeedFile(File retrievedFile, String processKey) {
    if (isHandlerForFileName(retrievedFile.getName())) {
        processFeed(retrievedFile, processKey);
    }
}
Questions:
Which handleFeedFile method gets invoked by the channel adapter?
When I invoke a method in the application code using Spring integration, how are the method parameters determined?
Thanks for any help!
Edit:
I ran my process locally (downloaded a local SFTP server - http://www.coreftp.com/server/index.html) and determined that the handleFeedFile(File file) method was invoked.
You probably want to refer to F.6 Message Mapping rules and conventions.
Multiple parameters could create a lot of ambiguity with regards to determining the appropriate mappings. The general advice is to annotate your method parameters with @Payload and/or @Header/@Headers. Below are some of the examples of ambiguous conditions which result in an Exception being raised.
and:
Multiple methods:
Message Handlers with multiple methods are mapped based on the same rules that are described above, however some scenarios might still look confusing.
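If you can annotate the target, a hedged sketch of how the two-argument method could be made unambiguous (in SI 2.x/3.x these annotations live in org.springframework.integration.annotation; they moved to org.springframework.messaging.handler.annotation in later versions):

import java.io.File;

import org.springframework.integration.annotation.Header;
import org.springframework.integration.annotation.Payload;

public class FooFeedHandlerImpl {

    // Explicit mappings: payload -> File argument, optional header -> processKey
    public void handleFeedFile(@Payload File retrievedFile,
                               @Header(value = "processKey", required = false) String processKey) {
        // delegate to the existing processing logic
    }
}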
If you're not in a position to annotate your target methods, then you might be able to use a SpEL expression to call your intended method:
3.3.2 Configuring An Outbound Channel Adapter
Like many other Spring Integration components, the <int:inbound-channel-adapter/> and <int:outbound-channel-adapter/> also provide support for SpEL expression evaluation. To use SpEL, provide the expression string via the 'expression' attribute instead of providing the 'ref' and 'method' attributes that are used for method-invocation on a bean. When an Expression is evaluated, it follows the same contract as method-invocation where: the expression for an <int:inbound-channel-adapter/> will generate a message anytime the evaluation result is a non-null value, while the expression for an <int:outbound-channel-adapter/> must be the equivalent of a void returning method invocation.
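For example, a hedged sketch that forces the single-argument overload via SpEL (bean references in SpEL use the @ prefix; the quoted form is used here because of the hyphen in the bean name):

<int:outbound-channel-adapter channel="foo-fileChannel"
        expression="@'foo-handlerTarget'.handleFeedFile(payload)">
    <int:poller fixed-delay="5000" receive-timeout="1000" max-messages-per-poll="10"/>
</int:outbound-channel-adapter>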
According to the documentation on Spring integration, the POJO (bean foo-handlerTarget) in your case will get called with a Message object containing the payload. Have you executed your code? I'd expect it generates a NoSuchMethodError.
You need a
public void handleFeedFile(Message<?> message);
method.
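A short sketch of what that could look like, with the payload extracted manually (in SI 2.x/3.x the Message type is org.springframework.integration.Message; from 4.x it is org.springframework.messaging.Message):

import java.io.File;

import org.springframework.integration.Message;

public class FooFeedHandlerImpl {

    public void handleFeedFile(Message<?> message) {
        File retrievedFile = (File) message.getPayload();
        String processKey = (String) message.getHeaders().get("processKey"); // may be null
        // delegate to the existing processing logic
    }
}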
I have a configuration to read data from the DB using jdbc:inbound-channel-adapter. The configuration:
<int-jdbc:inbound-channel-adapter query="SELECT * FROM requests WHERE processed_status = '' OR processed_status IS NULL LIMIT 5" channel="requestsJdbcChannel"
data-source="dataSource" update="UPDATE requests SET processed_status = 'INPROGRESS', date_processed = NOW() WHERE id IN (:id)" >
<int:poller fixed-rate="30000" />
</int-jdbc:inbound-channel-adapter>
<int:splitter input-channel="requestsJdbcChannel" output-channel="requestsQueueChannel"/>
<int:channel id="requestsQueueChannel">
<int:queue capacity="1000"/>
</int:channel>
<int:chain id="requestsChain" input-channel="requestsQueueChannel" output-channel="requestsApiChannel">
<int:poller max-messages-per-poll="1" fixed-rate="1000" />
.
.
</int:chain>
In the above configuration, I have defined the jdbc poller with a fixed-rate of 30 seconds. When there is a direct channel instead of requestsQueueChannel, the select query gets only 5 rows (since I am limiting the rows in the select query) and waits another 30 seconds for the next poll.
But after I introduce requestsQueueChannel with a queue and add a poller inside requestsChain, the jdbc inbound adapter doesn't work as expected. It doesn't wait another 30 seconds for the next poll. Sometimes it polls the DB twice in a row (within a second), as if there are 2 threads running, and gets two sets of rows from the DB. However, there is no async handoff except those mentioned above.
My understanding is that even if there is requestsQueueChannel, once it executes the select query it should wait for another 30 seconds to poll the DB. Is there anything I am missing? I just want to understand the behavior of this configuration.
When using a DirectChannel the next poll isn't considered until the current one ends.
When using a QueueChannel (or task executor), the poller is free to run again.
Inbound adapters have max-messages-per-poll set to 1 by default so your config should work as expected. Can you post a DEBUG log somewhere?
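To rule out misconfiguration, the default can be made explicit on the JDBC poller (same query and update as in the question):

<int-jdbc:inbound-channel-adapter
        query="SELECT * FROM requests WHERE processed_status = '' OR processed_status IS NULL LIMIT 5"
        update="UPDATE requests SET processed_status = 'INPROGRESS', date_processed = NOW() WHERE id IN (:id)"
        channel="requestsJdbcChannel" data-source="dataSource">
    <!-- one poll every 30 seconds, one message (the whole result set) per poll -->
    <int:poller fixed-rate="30000" max-messages-per-poll="1"/>
</int-jdbc:inbound-channel-adapter>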
The issue of Spring Integration pollers activating twice, as though there are 2 threads, is basically the same problem I came across here, with file system pollers:
How to prevent duplicate Spring Integration service activations when polling directory
Apparently this is a relatively common misconfiguration, where Spring root and servlet contexts both load the Spring Integration configuration. As a result of this, there are indeed two threads, and pollers can be seen to activate twice within their polling period. Usually within a few seconds of each other, as each will start when its context loads.
My approach to ensuring that the Spring Integration configuration was only loaded in a single context was to structure the project packages to ensure separation.
First define a web config which only picks up classes under the "web" package.
@Configuration
@ComponentScan(basePackages = { "com.myapp.web" })
@EnableWebMvc
public class WebConfig extends WebMvcConfigurerAdapter {

    @Override
    public void configureDefaultServletHandling(DefaultServletHandlerConfigurer configurer) {
        configurer.enable();
    }
}
Create separate root configuration classes to load beans such as services and repositories, which do not belong in the servlet context. One of these should load the Spring Integration configuration. i.e.:
@Configuration
@ComponentScan(basePackages = { "com.myapp.eip" })
@ImportResource(value = { "classpath:META-INF/spring/integration-context.xml" })
public class EipConfig {
}
An additional factor in the configuration that took a little while to work out, was that my servlet filters and web security config needed to be in the root context rather than the servlet context.
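A hedged sketch of how the two configs can be registered so that the integration configuration is loaded exactly once, in the root context (the initializer class name is made up; it assumes WebConfig and EipConfig live in the packages scanned above):

import com.myapp.eip.EipConfig;
import com.myapp.web.WebConfig;

import org.springframework.web.servlet.support.AbstractAnnotationConfigDispatcherServletInitializer;

public class AppInitializer extends AbstractAnnotationConfigDispatcherServletInitializer {

    @Override
    protected Class<?>[] getRootConfigClasses() {
        // root context: services, repositories, Spring Integration
        return new Class<?>[] { EipConfig.class };
    }

    @Override
    protected Class<?>[] getServletConfigClasses() {
        // servlet context: web layer only
        return new Class<?>[] { WebConfig.class };
    }

    @Override
    protected String[] getServletMappings() {
        return new String[] { "/" };
    }
}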
I have a grails 2.2 application that uses the JMS plugin (using version 1.3).
The situation I have is that when my server starts up, the JMS plugin initialises and the Listener service grabs any waiting messages on the queue before the server has completed setting up.
Specifically, it hits the first Hibernate query in the code and fails with the following error:
| Error 2014-10-14 11:06:56,535 [ruleInputDataListenerJmsListenerContainer-1] ERROR drms.RuleInputDataListenerService - Message Exception: Failed to process JMS Message.
groovy.lang.MissingMethodException: No signature of method: au.edu.csu.drms.Field.executeQuery() is applicable for argument types: () values: []
Possible solutions: executeQuery(java.lang.String), executeQuery(java.lang.String, java.util.Collection), executeQuery(java.lang.String, java.util.Map), executeQuery(java.lang.String, java.util.Collection, java.util.Map), executeQuery(java.lang.String, java.util.Map, java.util.Map)
The code in question is correct:
String query = "SELECT f FROM field f WHERE (attributeName = :attributeName AND entityName = :entityName)"
def fieldList = Field.executeQuery(query, [attributeName: _attributeName, entityName: _entityName])
From what I can tell, it's a matter of Hibernate not being initialised when the JMS listener executes the onMessage method. It also happens with a withCriteria or any other Hibernate query method.
It only happens when there are messages waiting on the queue at server start-up, and it fails for each waiting message. Once that backlog is cleared and it processes new messages, it works fine.
Is there a way to either get Hibernate to initialise in time, or to delay the listener service from executing (much like the Quartz plugin, which has a start-up delay timer)?
Update:
I don't use a bean configuration because it's a daemon type application - we have no beans to define.
Is there a way to use @DependsOn and have my listener depend on Hibernate itself?
Let's say you have the following EntityManagerFactory configuration:
<bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
<property name="jpaDialect" ref="jpaDialect"/>
</bean>
You need to make your JMS connection factory depend on entityManagerFactory:
<bean id="jmsConnectionFactory" class="org.apache.activemq.pool.PooledConnectionFactory"
destroy-method="stop" depends-on="jmsBroker, entityManagerFactory">
<property name="connectionFactory" ref="activeMQConnectionFactory"/>
</bean>
Unfortunately, the @DependsOn annotation didn't work due to the nature of my application (no bean configuration).
Given that there are a few bugs/problems with the Grails JMS plugin, the solution to my problem was to use the following code prior to processing the JMS Message:
def onMessage(msg) {
    try {
        Rule.withNewTransaction {
            log.info("Hibernate is up and running!")
        }
    } catch (Exception e) {
        resendMessage(msg)
    }
    // rest of code...
}
Here I use a transaction to test whether Hibernate is fully initialised (I had already verified that it is not when the JMS listener fires at start-up); if an exception is caught, the message is resent to the queue for re-processing.
I am working on a chat project in Java with Spring 3.2.
Normally in Java I can create a thread like this:
public class Listener extends Thread {
    public void run() {
        while (true) {
        }
    }
}
and start the thread by calling start().
But in Spring 3.x, are there any special classes or any special way to achieve this thread functionality?
My requirements:
I have 2 telecom domain servers. In my application I need to initialize the servers to create the protocol.
After the servers are initialized, I need to start two threads to listen for the responses from the telecom domain servers.
What I have done is given below:
public class Listener extends Thread {
    public void run() {
        while (true) {
            if (ServerOne.protocol != null) {
                Message events = ServerOne.protocol.receive();
                // Other steps to display the message in the view
            }
        }
    }
}
Is it possible to achieve this thread functionality with Quartz?
If possible, which one is better? If not, what is the reason?
I hope the community can suggest a good solution.
Spring's TaskExecutor is a good way to run these threads in a managed environment. You could refer to http://static.springsource.org/spring/docs/3.0.x/reference/scheduling.html for some examples.
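A minimal sketch of that approach, reusing the Listener class from the question (pool sizes are arbitrary; since Thread implements Runnable, the executor can run it directly):

import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

public class ListenerStarter {

    public static void main(String[] args) {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(2); // one thread per telecom server
        executor.setMaxPoolSize(2);
        executor.initialize();

        // Listener is the receive-loop class from the question
        executor.execute(new Listener());
        // the listener for the second server would be submitted the same way
    }
}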
Java has what you need. I advise against using Thread manually. You should always use something that manages the threads for you. Please have a look at this: https://stackoverflow.com/a/1800583/2542027
You could use Spring's ThreadPoolTaskExecutor
You can define your executor as follows in your configuration file:
<bean id="taskExecutor" class="org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor" destroy- method="shutdown">
<property name="corePoolSize" value="2" />
<property name="maxPoolSize" value="2" />
<property name="queueCapacity" value="10" />
</bean>
<task:annotation-driven executor="taskExecutor" />
In your Listener you could have a method that does all the work in it and annotate this method with the @Async annotation. Of course, Listener should also be Spring managed.
public class Listener {

    @Async
    public void doSomething() {
        while (true) {
            if (ServerOne.protocol != null) {
                Message events = ServerOne.protocol.receive();
                // Other steps to display the message in the view
            }
        }
    }
}
Now every time doSomething is called, a new thread is created if the executor is running fewer than corePoolSize threads. Once corePoolSize threads exist, a subsequent call to doSomething only creates a new thread if the task queue is full and fewer than maxPoolSize threads are running. More about pool sizes can be read in the Javadocs.
Note: While using @Async you might encounter the following exception if the CGLIB library is not available in your application.
Cannot proxy target class because CGLIB2 is not available. Add CGLIB to the class path or specify proxy interfaces.
To overcome this without having to add the CGLIB dependency, you can create an interface IListener that declares doSomething() and then have Listener implement IListener.
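A rough sketch of that interface-based alternative (JDK dynamic proxies are used when the target implements an interface; the IListener name is just the one suggested above, and both types are shown in one file for brevity):

import org.springframework.scheduling.annotation.Async;
import org.springframework.stereotype.Component;

public interface IListener {
    void doSomething();
}

@Component
class Listener implements IListener {

    @Async
    @Override
    public void doSomething() {
        // the receive loop from the example above
    }
}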
Since Spring 3, you can use the @Scheduled annotation:
@Service
public class MyTest {
    ...
    @Scheduled(fixedDelay = 10)
    public void getCounter() {...}
}
with <context:component-scan base-package="ch.test.mytest"/> and <task:annotation-driven/> in the context file
Please refer to this tutorial: http://spring.io/blog/2010/01/05/task-scheduling-simplifications-in-spring-3-0/
I have a remote service that I'm calling to load pricing data for a product, when a specific event occurs. Once loaded, the product pricing is then broadcast for another consumer to process elsewhere.
The calling code doesn't care about the response - it's fire-and-forget, responding to an application event, and triggering a new workflow.
In order to keep the calling code as quick as possible, I'd like to use @Async here, but I'm having mixed results.
The basic flow is:
CallingCode -> ProductPricingGateway -> Aggregator -> BatchedFetchPricingTask
Here's the Async setup:
<task:annotation-driven executor="executor" scheduler="scheduler"/>
<task:scheduler id="scheduler" pool-size="1" />
<task:executor id="executor" keep-alive="30" pool-size="10-20" queue-capacity="500" rejection-policy="CALLER_RUNS" />
The other two components used are a @Gateway, which the initiating code calls, and a downstream @ServiceActivator that sits behind an aggregator (calls are batched into small groups).
public interface ProductPricingGateway {
    @Gateway(requestChannel = "product.pricing.outbound.requests")
    public void broadcastPricing(ProductIdentifer productIdentifier);
}
// ...elsewhere...
@Component
public class BatchedFetchPricingTask {

    @ServiceActivator(inputChannel = "product.pricing.outbound.requests.batch")
    public void fetchPricing(List<ProductIdentifer> identifiers) {
        // omitted
    }
}
And the other relevant integration config:
<int:gateway service-interface="ProductPricingGateway"
default-request-channel="product.pricing.outbound.requests" />
<int:channel id="product.pricing.outbound.requests" />
<int:channel id="product.pricing.outbound.requests.batch" />
I find that if I declare @Async on the @ServiceActivator method, it works fine.
However, if I declare it on the @Gateway method (which seems like a more appropriate place), the aggregator is never invoked.
Why?
I'm struggling to see how @Async would work anywhere here, because the starting point is when your code calls the ProductPricingGateway.broadcastPricing() method.
With @Async on the gateway, what would the scheduler send?
Similarly, with @Async on the service, what would the scheduler pass in as identifiers?
The correct way to go async as soon as possible would be to make product.pricing.outbound.requests an ExecutorChannel...
http://static.springsource.org/spring-integration/reference/html/messaging-channels-section.html#executor-channel
http://static.springsource.org/spring-integration/reference/html/messaging-channels-section.html#channel-configuration-executorchannel
...where the calling thread hands off the message to a task executor.
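A hedged sketch of that change, reusing the task executor already defined in the question's Async setup:

<!-- callers of the gateway return immediately; an executor thread carries the message downstream -->
<int:channel id="product.pricing.outbound.requests">
    <int:dispatcher task-executor="executor"/>
</int:channel>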