I have several Spring Integration elements configured in an XML file (see below).
From the AMQP channel adapter, messages are directed to the router integrationSecondaryRouter, which is backed by the integrationRouterImpl bean.
If there is an uncaught exception in integrationRouterImpl, I expect RabbitMQ to redeliver the message again and again. However, this does not happen: the RabbitMQ monitor shows no message accumulation. Is there an error in my configuration?
<int-amqp:inbound-channel-adapter
channel="integrationFrontDoorQueueChannel"
queue-names="${integration.creation.orders.queue.name}"
header-mapper="integrationHeaderMapper"
connection-factory="connectionFactory"
error-channel="errorChannel"
/>
<int:chain
id="integrationFrontDoorQueueChain"
input-channel="integrationFrontDoorQueueChannel"
output-channel="integrationRouterChannel">
<int:transformer ref="integrationJsonPayloadTransformer" method="transformMessagePayload"/>
<int:filter ref="integrationNonDigitalCancellationFilter" method="filter"/>
<int:filter ref="integrationPartnerFilter" method="filter"/>
<int:filter ref="integrationOrderDtoDgcAndGoSelectFilter" method="filter"/>
</int:chain>
<int:header-value-router
id="integrationPrimaryRouter"
input-channel="integrationRouterChannel"
default-output-channel="integrationFrontDoorRouterChannel"
resolution-required="false"
header-name="#{T(com.smartdestinations.constants.SdiConstants).INTEGRATION_PAYLOAD_ACTION_HEADER_KEY}">
<int:mapping
value="#{T(com.smartdestinations.service.integration.dto.IntegrationAction).EXCLUSION_SCAN.name()}"
channel="integrationExclusionChannel"
/>
</int:header-value-router>
<int:router
id="integrationSecondaryRouter"
ref="integrationRouterImpl"
input-channel="integrationFrontDoorRouterChannel"
method="route"
resolution-required="false"
default-output-channel="nullChannel"
/>
Look, you have error-channel="errorChannel", and the documentation on the matter points out:
The default "errorChannel" is a PublishSubscribeChannel.
Yes, there is one subscriber, but it is just the _org.springframework.integration.errorLogger.
Since no one re-throws your exception back to the SimpleMessageListenerContainer, there is no reason to nack the message and redeliver it.
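To actually get redelivery, a sketch of two options (not from the original answer; the bean name and class `rethrowingErrorHandler` / `com.example.RethrowingErrorHandler` are hypothetical): either remove error-channel="errorChannel" from the inbound adapter so the exception propagates to the listener container, or subscribe a handler to errorChannel that re-throws the cause:

```xml
<!-- Option 1: simply remove error-channel="errorChannel" from the
     inbound-channel-adapter so exceptions reach the container directly. -->

<!-- Option 2: add a second subscriber on errorChannel that re-throws.
     The container then nacks the message and the broker redelivers it.
     The bean below is hypothetical: its handle() method re-throws the
     MessagingException payload of the ErrorMessage. -->
<int:service-activator input-channel="errorChannel"
                       ref="rethrowingErrorHandler"
                       method="handle"/>
<bean id="rethrowingErrorHandler"
      class="com.example.RethrowingErrorHandler"/>
```

Note that redelivery on nack also depends on the container's defaultRequeueRejected setting (true by default).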
Hi, how can I delete the source file from SFTP after consuming it? Below is my configuration. It consumes and processes the file properly, but on the next poll it reads the same file again. I would like to delete the source file, or otherwise avoid reading the same file again. I am using 4.3.13.
<int-sftp:inbound-streaming-channel-adapter id="sftpAdapter"
session-factory="sftpSessionFactory"
filename-pattern="*.xml"
channel="receiveChannel"
remote-directory="/tmp/charge/">
</int-sftp:inbound-streaming-channel-adapter>
<int:poller fixed-rate="30000" max-messages-per-poll="1" id="ChargePoller"/>
<int:channel id="receiveChannel">
<int:queue/>
</int:channel>
<int:stream-transformer id="withCharset" charset="UTF-8" input-channel="receiveChannel" output-channel="fileInString" />
<int:service-activator id="ChargeFeedListener" input-channel="fileInString" method="onMessage" >
<bean class="listener.ChargeFeedListener">
<constructor-arg name="ChargeService" ref="ChargeService"/>
</bean>
</int:service-activator>
The <int-sftp:inbound-streaming-channel-adapter> stores this info in the headers of the message it produces for the remote file stream:
return getMessageBuilderFactory()
.withPayload(session.readRaw(remotePath))
.setHeader(IntegrationMessageHeaderAccessor.CLOSEABLE_RESOURCE, session)
.setHeader(FileHeaders.REMOTE_DIRECTORY, file.getRemoteDirectory())
.setHeader(FileHeaders.REMOTE_FILE, file.getFilename())
.setHeader(FileHeaders.REMOTE_HOST_PORT, session.getHostPort())
.setHeader(FileHeaders.REMOTE_FILE_INFO,
this.fileInfoJson ? file.toJson() : file);
Pay attention to the FileHeaders.REMOTE_DIRECTORY and FileHeaders.REMOTE_FILE headers.
Such info can be used by an <int-sftp:outbound-gateway> with command="rm" and an expression like headers[file_remoteDirectory]+'/'+headers[file_remoteFile].
This gateway can be used as a second subscriber to the fileInString message channel, which you should make a <publish-subscribe-channel>: https://docs.spring.io/spring-integration/docs/current/reference/html/core.html#channel-implementations-publishsubscribechannel.
See also this in docs: https://docs.spring.io/spring-integration/docs/current/reference/html/sftp.html#sftp-streaming-java-config
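Putting that together, a sketch of the delete step (hedged: the exact attribute names should be checked against your 4.3.x schema, and you may want order attributes so the delete runs after processing):

```xml
<!-- Make fileInString publish-subscribe so both the service activator
     and the delete gateway see each message. -->
<int:publish-subscribe-channel id="fileInString"/>

<!-- Second subscriber: remove the remote file that was just streamed,
     using the headers populated by the streaming inbound adapter. -->
<int-sftp:outbound-gateway
        session-factory="sftpSessionFactory"
        request-channel="fileInString"
        command="rm"
        expression="headers[file_remoteDirectory] + '/' + headers[file_remoteFile]"
        reply-channel="nullChannel"/>
```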
We have a Java/Spring/Tomcat application deployed on a RHEL 7.0 VM. It uses AlejandroRivera/embedded-rabbitmq, starts the RabbitMQ server as soon as the WAR is deployed, and connects to it. We have multiple queues that we use to handle and filter events.
The flow is something like this:
event that we received -> publish event queue -> listener class filters events --> publish to another queue for processing
-> we publish to yet another queue for logging.
The issue is:
Processing starts normally and we can see messages flowing through the queues, but after some time the listener class stops receiving events. It seems we were able to publish to the RabbitMQ channel, but the message never got off the queue to the listener.
This then degrades further, with events taking longer and longer to be processed, up to minutes. The load isn't high: around 200 events, of which only a handful matter to us.
What we tried:
Initially the queues had prefetch set to 1 and the consumers set to a minimum of 2 and a maximum of 5. We removed the prefetch and added more consumers via the max-concurrency setting, but the issue is still there; the delay just takes longer to appear, and after a few minutes processing starts to take around 20-30 seconds.
We see in the logs that we published the event to the queue, and we see the log line showing we got it off the queue after a delay, so nothing in our code runs in between that could cause the delay.
As far as we can tell, the rest of the queues process messages properly; it's only this one that gets stuck.
The errors I see are the following, but I'm unsure what they mean and whether they are related:
Jun 4 11:16:04 server: [pool-3-thread-10] ERROR com.rabbitmq.client.impl.ForgivingExceptionHandler - Consumer org.springframework.amqp.rabbit.listener.BlockingQueueConsumer$InternalConsumer#70dfa413 (amq.ctag-VaWc-hv-VwcUPh9mTQTj7A) method handleDelivery for channel AMQChannel(amqp://agent#127.0.0.1:5672/,198) threw an exception for channel AMQChannel(amqp://agent#127.0.0.1:5672/,198)
Jun 4 11:16:04 server: java.io.IOException: Unknown consumerTag
Jun 4 11:16:04 server: at com.rabbitmq.client.impl.ChannelN.basicCancel(ChannelN.java:1266)
Jun 4 11:16:04 server: at sun.reflect.GeneratedMethodAccessor180.invoke(Unknown Source)
Jun 4 11:16:04 server: at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
Jun 4 11:16:04 server: at java.lang.reflect.Method.invoke(Method.java:498)
Jun 4 11:16:04 server: at org.springframework.amqp.rabbit.connection.CachingConnectionFactory$CachedChannelInvocationHandler.invoke(CachingConnectionFactory.java:955)
Jun 4 11:16:04 server: at com.sun.proxy.$Proxy119.basicCancel(Unknown Source)
Jun 4 11:16:04 server: at org.springframework.amqp.rabbit.listener.BlockingQueueConsumer$InternalConsumer.handleDelivery(BlockingQueueConsumer.java:846)
Jun 4 11:16:04 server: at com.rabbitmq.client.impl.ConsumerDispatcher$5.run(ConsumerDispatcher.java:149)
Jun 4 11:16:04 server: at com.rabbitmq.client.impl.ConsumerWorkService$WorkPoolRunnable.run(ConsumerWorkService.java:100)
Jun 4 11:16:04 server: at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
Jun 4 11:16:04 server: at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
Jun 4 11:16:04 server: at java.lang.Thread.run(Thread.java:748)
This one happens on shutdown of the application, but I've seen it happen while the application is still running:
2018-06-05 13:22:45,443 ERROR CachingConnectionFactory$DefaultChannelCloseLogger - Channel shutdown: channel error; protocol method: #method<channel.close>(reply-code=406, reply-text=PRECONDITION_FAILED - unknown delivery tag 109, class-id=60, method-id=120)
I'm not sure how to address these two errors, nor if they are related.
Here's my Spring config:
<!-- Queues -->
<rabbit:queue id="monitorIncomingEventsQueue" name="MonitorIncomingEventsQueue"/>
<rabbit:queue id="interestingEventsQueue" name="InterestingEventsQueue"/>
<rabbit:queue id="textCallsEventsQueue" name="TextCallsEventsQueue"/>
<rabbit:queue id="callDisconnectedEventQueue" name="CallDisconnectedEventQueue"/>
<rabbit:queue id="incomingCallEventQueue" name="IncomingCallEventQueue"/>
<rabbit:queue id="eventLoggingQueue" name="EventLoggingQueue"/>
<!-- listeners -->
<bean id="monitorListener" class="com.example.rabbitmq.listeners.monitorListener"/>
<bean id="interestingEventsListener" class="com.example.rabbitmq.listeners.InterestingEventsListener"/>
<bean id="textCallsEventListener" class="com.example.rabbitmq.listeners.TextCallsEventListener"/>
<bean id="callDisconnectedEventListener" class="com.example.rabbitmq.listeners.CallDisconnectedEventListener"/>
<bean id="incomingCallEventListener" class="com.example.rabbitmq.listeners.IncomingCallEventListener"/>
<bean id="eventLoggingEventListener" class="com.example.rabbitmq.listeners.EventLoggingListener"/>
<rabbit:listener-container connection-factory="connectionFactory" message-converter="defaultMessageConverter" concurrency="5" max-concurrency="40" acknowledge="none">
<rabbit:listener queues="interestingEventsQueue" ref="interestingEventsListener" method="handleIncomingMessage"/>
</rabbit:listener-container>
<rabbit:listener-container connection-factory="connectionFactory" message-converter="defaultMessageConverter" concurrency="5" max-concurrency="20" acknowledge="none">
<rabbit:listener queues="textCallsEventsQueue" ref="textCallsEventListener" method="handleIncomingMessage"/>
</rabbit:listener-container>
<rabbit:listener-container connection-factory="connectionFactory" message-converter="defaultMessageConverter" concurrency="5" max-concurrency="20" acknowledge="none">
<rabbit:listener queues="callDisconnectedEventQueue" ref="callDisconnectedEventListener" method="handleIncomingMessage"/>
</rabbit:listener-container>
<rabbit:listener-container connection-factory="connectionFactory" message-converter="defaultMessageConverter" concurrency="5" max-concurrency="30" acknowledge="none">
<rabbit:listener queues="incomingCallEventQueue" ref="incomingCallEventListener" method="handleIncomingMessage"/>
</rabbit:listener-container>
<rabbit:listener-container connection-factory="connectionFactory" message-converter="defaultMessageConverter" concurrency="1" max-concurrency="3" acknowledge="none">
<rabbit:listener queues="monitorIncomingEventsQueue" ref="monitorListener" method="handleIncomingMessage"/>
</rabbit:listener-container>
<rabbit:listener-container connection-factory="connectionFactory" message-converter="defaultMessageConverter" concurrency="5" max-concurrency="10" acknowledge="none">
<rabbit:listener queues="EventLoggingQueue" ref="eventLoggingEventListener" method="handleLoggingEvent"/>
</rabbit:listener-container>
<rabbit:connection-factory id="connectionFactory" host="${host.name}" port="${port.number}" username="${user.name}" password="${user.password}" connection-timeout="20000"/>
I've read here that the processing delay could be caused by a network problem, but in this case the server and the app are on the same VM. It's a locked-down environment, so most ports aren't open, but I doubt that's what's wrong.
More logs: https://pastebin.com/4QMFDT7A
Any help is appreciated. Thanks.
I need to see much more log than that - this is the smoking gun:
Storing...Storing delivery for Consumer#a2ce092: tags=[{}]
The (consumer) tags list is empty, which means the consumer had already been canceled at that time (for some reason, which should appear earlier in the log).
If there's any chance you could reproduce this with 1.7.9.BUILD-SNAPSHOT, I have added some TRACE-level logging that should help diagnose it.
EDIT
In reply to your recent comment on rabbitmq-users...
Can you try with fixed concurrency? Variable concurrency in Spring AMQP's container is often not very useful, because consumers are typically reduced only when the entire container has been idle for some time.
It might explain, however, why you are seeing consumers being canceled.
Perhaps there are/were some race conditions in that logic; using a fixed number of consumers (don't specify max-concurrency) will avoid that. If you can try it, it will at least eliminate that as a possibility.
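As a concrete sketch of that suggestion, here is one of the containers from the question rewritten with fixed concurrency (no max-concurrency attribute):

```xml
<rabbit:listener-container connection-factory="connectionFactory"
        message-converter="defaultMessageConverter"
        concurrency="5"
        acknowledge="none">
    <rabbit:listener queues="interestingEventsQueue"
            ref="interestingEventsListener"
            method="handleIncomingMessage"/>
</rabbit:listener-container>
```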
That said, I am confused (I didn't notice this in your Stack Overflow configuration): with acknowledge="none" there should be no acks sent to the broker at all (NONE is used to set autoAck):
String consumerTag = this.channel.basicConsume(queue, this.acknowledgeMode.isAutoAck(), ...
and
public boolean isAutoAck() {
return this == NONE;
}
Are you sending acks from your own code? If so, the ack mode should be MANUAL. I can't see a scenario where the container itself would send an ack with a NONE ack mode.
I am using payload-deserializer-transformer in my TCP client as follows.
<context:property-placeholder />
<int:gateway id="gw"
service-interface="myGateway"
default-request-channel="objectIn"
default-reply-channel="objectOut" />
<int-ip:tcp-connection-factory id="client"
type="client"
host="${client.server.TCP.host}"
port="${client.server.TCP.port}"
single-use="true"
so-timeout="10000" />
<int:channel id="objectIn" />
<int:payload-serializing-transformer input-channel="objectIn" output-channel="bytesOut"/>
<int:channel id="bytesOut" />
<int-ip:tcp-outbound-gateway id="outGateway"
request-channel="bytesOut"
reply-channel="bytesIn"
connection-factory="client"
request-timeout="10000"
reply-timeout="10000" />
<int:channel id="bytesIn" />
<int:payload-deserializing-transformer input-channel="bytesIn" output-channel="objectOut" />
<int:channel id="objectOut" />
The above works fine for message lengths < 2048, but if a message exceeds this limit I get the following error:
Caused by: java.io.IOException: CRLF not found before max message length: 2048
at org.springframework.integration.ip.tcp.serializer.ByteArrayCrLfSerializer.fillToCrLf(ByteArrayCrLfSerializer.java:66)
at org.springframework.integration.ip.tcp.serializer.ByteArrayCrLfSerializer.deserialize(ByteArrayCrLfSerializer.java:44)
at org.springframework.integration.ip.tcp.serializer.ByteArrayCrLfSerializer.deserialize(ByteArrayCrLfSerializer.java:31)
at org.springframework.integration.ip.tcp.connection.TcpNetConnection.getPayload(TcpNetConnection.java:120)
at org.springframework.integration.ip.tcp.connection.TcpMessageMapper.toMessage(TcpMessageMapper.java:113)
at org.springframework.integration.ip.tcp.connection.TcpNetConnection.run(TcpNetConnection.java:165)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
How can I set maxMessageSize property on the payload-deserializing-transformer in this case?
This has nothing to do with the transformer; the error is in the outbound gateway.
First of all, you should not be using text-based delimiting for inbound TCP messages; a serialized object contains binary data and might contain CRLF (0x0d0a) somewhere in the middle.
You should be using one of the binary-capable deserializers in the gateway.
You can read about TCP serializers/deserializers in the reference manual.
You should configure a ByteArrayLengthHeaderSerializer in the serializer and deserializer attributes of the client connection factory used by the outbound gateway; it can handle binary payloads.
The remote system will also need to be changed to use a length header instead of using CRLF to detect the end of a message. If the remote system is also Spring Integration, simply change its serializer/deserializer too.
For other readers who are using text-based messaging: the ByteArrayCrLfSerializer can be configured with a maxMessageSize, which defaults to 2048.
The ByteArrayLengthHeaderSerializer also has a configurable maxMessageSize (also defaulting to 2048); it exists to prevent OOM conditions when a bad message is received.
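A sketch of the client side with a length-header (de)serializer, reusing the connection factory from the question; the 4-byte header default and the 65536 limit are illustrative choices, not values from the original post:

```xml
<!-- Default constructor = 4-byte length header; raise the 2048 default
     so larger serialized objects fit. -->
<bean id="lengthHeaderSerializer"
      class="org.springframework.integration.ip.tcp.serializer.ByteArrayLengthHeaderSerializer">
    <property name="maxMessageSize" value="65536"/>
</bean>

<int-ip:tcp-connection-factory id="client"
        type="client"
        host="${client.server.TCP.host}"
        port="${client.server.TCP.port}"
        single-use="true"
        so-timeout="10000"
        serializer="lengthHeaderSerializer"
        deserializer="lengthHeaderSerializer"/>
```

Remember that the remote system must also be changed to write and read the same length header instead of CRLF framing.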
Please share any links on configuring Activiti with Camel. All the examples I could find show SERVICETASK -> CAMELROUTE -> FILE and then FILE -> RECEIVETASK (Activiti).
This involves some BUSINESS_KEY, which I couldn't figure out.
I need an example showing SERVICETASK -> CAMELROUTE -> RECEIVETASK (signal Activiti). I don't know why, but the example below gives me an error.
file: activiti-flow.bpmn20.xml:
<process id="camelprocess" name="My process" isExecutable="true">
<startEvent id="startevent1" name="Start"></startEvent>
<serviceTask id="servicetask1" name="Service Task" activiti:async="true" activiti:delegateExpression="${camel}"></serviceTask>
<receiveTask id="receivetask1" name="Receive Task"></receiveTask>
<endEvent id="endevent1" name="End"></endEvent>
<sequenceFlow id="flow1" sourceRef="startevent1" targetRef="servicetask1"></sequenceFlow>
<sequenceFlow id="flow2" sourceRef="servicetask1" targetRef="receivetask1"></sequenceFlow>
<sequenceFlow id="flow3" sourceRef="receivetask1" targetRef="endevent1"></sequenceFlow>
file: activiti-camel-spring.xml:
<bean id="camel" class="org.activiti.camel.CamelBehaviour">
<constructor-arg index="0">
<list>
<bean class="org.activiti.camel.SimpleContextProvider">
<constructor-arg index="0" value="camelprocess" />
<constructor-arg index="1" ref="camelContext" />
</bean>
</list>
</constructor-arg>
</bean>
<camel:camelContext id="camelContext">
<camel:route>
<camel:from uri="activiti:camelprocess:servicetask1"/>
<camel:to uri="bean:serviceActivator?method=doSomething(${body})"/>
<camel:to uri="activiti:camelprocess:receivetask1"/>
</camel:route>
</camel:camelContext>
Error is:
1|ERROR|org.slf4j.helpers.MarkerIgnoringBase:161||||>> Failed delivery for (MessageId: ID-viscx73-PC-49557-1376961951564-0-1 on ExchangeId: ID-viscx73-PC-49557-1376961951564-0-2). Exhausted after delivery attempt: 1 caught: org.activiti.engine.ActivitiIllegalArgumentException: Business key is null
at org.activiti.engine.impl.ProcessInstanceQueryImpl.processInstanceBusinessKey(ProcessInstanceQueryImpl.java:87)
at org.activiti.camel.ActivitiProducer.findProcessInstanceId(ActivitiProducer.java:78)
at org.activiti.camel.ActivitiProducer.signal(ActivitiProducer.java:58)
at org.activiti.camel.ActivitiProducer.process(ActivitiProducer.java:49)
at org.apache.camel.util.AsyncProcessorConverterHelper$ProcessorToAsyncProcessorBridge.process (AsyncProcessorConverterHelper.java:61)
at org.apache.camel.util.AsyncProcessorHelper.process(AsyncProcessorHelper.java:73)
All the forums/links show ACTIVITI -> CAMELROUTE(FILE) and then, in another route, CAMEL_FILE -> RECEIVETASK, and they suggest adding some key like PROCESS_KEY_PROPERTY or PROCESS_ID_PROPERTY.
I don't understand where these properties fit in.
I am trying to work from the example at this link:
http://bpmn20inaction.blogspot.in/2013/03/using-camel-routes-in-activiti-made.html
I am not sure whether the process, after handing the service task to Camel, is not moving to the receive task at all and is waiting there, or whether Camel is unable to find the receive task.
Please share some suggestions on this.
Thanks
It worked after adding the built-in Camel SEDA queues, as shown in the example. I had thought they were shown just as an example of various routes, but by passing the message to a queue the service task is actually made asynchronous in Camel; the message is later read from the queue and signals the receive task in Activiti.
<camel:to uri="seda:tempQueue"/>
<camel:from uri="seda:tempQueue"/>
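For completeness, a sketch of how those two SEDA endpoints fit into the route from the question:

```xml
<camel:camelContext id="camelContext">
    <camel:route>
        <camel:from uri="activiti:camelprocess:servicetask1"/>
        <camel:to uri="bean:serviceActivator?method=doSomething(${body})"/>
        <!-- hand off to an in-memory queue so the service task can complete -->
        <camel:to uri="seda:tempQueue"/>
    </camel:route>
    <camel:route>
        <camel:from uri="seda:tempQueue"/>
        <!-- signal the waiting receive task -->
        <camel:to uri="activiti:camelprocess:receivetask1"/>
    </camel:route>
</camel:camelContext>
```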
Thanks
I don't know whether you've solved the problem yet, but I faced the same one.
I finally found a solution.
It is indeed the case that the PROCESS_ID_PROPERTY property must be provided; otherwise the Activiti engine doesn't know which process instance to execute. So I set PROCESS_ID_PROPERTY in the header when sending the JMS message to ActiveMQ, and when the message comes back I set the property from the header again. Something like:
from("activiti:process:simpleCall").setHeader("PROCESS_ID_PROPERTY", simple("${property.PROCESS_ID_PROPERTY}")).to("activemq:queue:request");
from("activemq:queue:reply").setProperty("PROCESS_ID_PROPERTY", simple("${header.PROCESS_ID_PROPERTY}")).to("activiti:process:simpleReceive");
Hope it will help you.
I'm trying to upgrade from Camel 2.0 to 2.6.
I have this in my applicationContext-camel.xml file:
<camel:route >
<camel:from uri="transactionSaleBuffer" />
<camel:policy ref="routeTransactionPolicy"/>
<camel:transacted ref="transactionManagerETL" />
<camel:to uri="detailFactProcessor" />
</camel:route>
By adding the two lines in the middle (policy and transacted), I get the exception:
Caused by: org.apache.camel.FailedToCreateRouteException: Failed to create route route2 at: >>> From[transactionSaleBuffer] <<< in route: Route[[From[transactionSaleBuffer]] -> [Transacted[ref:trans... because of Route route2 has no output processors. You need to add outputs to the route such as to("log:foo").
I can see this is because the Camel class RouteDefinition.java calls ProcessorDefinitionHelper.hasOutputs(outputs, true).
This passes in an array containing one object ([Transacted[ref:transactionManagerETL]]).
This one object has two children:
[Transacted[ref:transactionManagerETL]]
CHILD-[Policy[ref:routeTransactionPolicy],
CHILD-To[detailFactProcessor]
The Policy child has no outputs, so the exception is thrown.
Yet I don't know how to add a child; my XML above matches the schema.
Maybe I'm missing something else?
My setup matches the example in Apache Camel: Book in One Page (see section: Camel 1.x - JMS Sample).
Can anyone please help me out?
Thanks!
Jeff Porter
Try the following (note that the <camel:to> is nested inside <camel:policy>):
<camel:route>
<camel:from uri="transactionSaleBuffer" />
<camel:transacted ref="transactionManagerETL" />
<camel:policy ref="routeTransactionPolicy">
<camel:to uri="detailFactProcessor" />
</camel:policy>
</camel:route>