How to delete File from SFTP while using inbound-streaming-channel-adapter - java

Hi, how can I delete the source file from SFTP after consuming it? Below is my configuration. It consumes the file properly and processes it, but on the next poll it reads the same file again. I would like to delete the source file, or otherwise avoid reading the same file again. I am using Spring Integration 4.3.13.
<int-sftp:inbound-streaming-channel-adapter id="sftpAdapter"
session-factory="sftpSessionFactory"
filename-pattern="*.xml"
channel="receiveChannel"
remote-directory="/tmp/charge/">
</int-sftp:inbound-streaming-channel-adapter>
<int:poller fixed-rate="30000" max-messages-per-poll="1" id="ChargePoller"/>
<int:channel id="receiveChannel">
<int:queue/>
</int:channel>
<int:stream-transformer id="withCharset" charset="UTF-8" input-channel="receiveChannel" output-channel="fileInString" />
<int:service-activator id="ChargeFeedListener" input-channel="fileInString" method="onMessage" >
<bean class="listener.ChargeFeedListener">
<constructor-arg name="ChargeService" ref="ChargeService"/>
</bean>
</int:service-activator>

The <int-sftp:inbound-streaming-channel-adapter> stores this information in the headers of each message it produces for the remote file stream:
return getMessageBuilderFactory()
        .withPayload(session.readRaw(remotePath))
        .setHeader(IntegrationMessageHeaderAccessor.CLOSEABLE_RESOURCE, session)
        .setHeader(FileHeaders.REMOTE_DIRECTORY, file.getRemoteDirectory())
        .setHeader(FileHeaders.REMOTE_FILE, file.getFilename())
        .setHeader(FileHeaders.REMOTE_HOST_PORT, session.getHostPort())
        .setHeader(FileHeaders.REMOTE_FILE_INFO,
                this.fileInfoJson ? file.toJson() : file);
Pay attention to the FileHeaders.REMOTE_DIRECTORY and FileHeaders.REMOTE_FILE headers.
This information can be used by an <int-sftp:outbound-gateway> with command="rm" and expression="headers['file_remoteDirectory'] + '/' + headers['file_remoteFile']".
This gateway can be added as a second subscriber to the fileInString message channel, which you should turn into a <publish-subscribe-channel>: https://docs.spring.io/spring-integration/docs/current/reference/html/core.html#channel-implementations-publishsubscribechannel.
See also this in docs: https://docs.spring.io/spring-integration/docs/current/reference/html/sftp.html#sftp-streaming-java-config
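For example, a minimal sketch of that wiring (the gateway id and subscriber ordering here are assumptions, not taken from the original configuration):
<int:publish-subscribe-channel id="fileInString"/>
<!-- second subscriber, alongside the existing service activator:
     removes the remote file once it has been consumed -->
<int-sftp:outbound-gateway id="removeRemoteFile"
    session-factory="sftpSessionFactory"
    request-channel="fileInString"
    command="rm"
    expression="headers['file_remoteDirectory'] + '/' + headers['file_remoteFile']"
    reply-channel="nullChannel"/>
The receiveChannel queue stays as it is; only fileInString changes from a default channel to a publish-subscribe one.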

Related

CRLF not found before max message length: 2048 with payload-deserializing-transformer

I am using a payload-deserializing-transformer in my TCP client as follows.
<context:property-placeholder />
<int:gateway id="gw"
service-interface="myGateway"
default-request-channel="objectIn"
default-reply-channel="objectOut" />
<int-ip:tcp-connection-factory id="client"
type="client"
host="${client.server.TCP.host}"
port="${client.server.TCP.port}"
single-use="true"
so-timeout="10000" />
<int:channel id="objectIn" />
<int:payload-serializing-transformer input-channel="objectIn" output-channel="bytesOut"/>
<int:channel id="bytesOut" />
<int-ip:tcp-outbound-gateway id="outGateway"
request-channel="bytesOut"
reply-channel="bytesIn"
connection-factory="client"
request-timeout="10000"
reply-timeout="10000" />
<int:channel id="bytesIn" />
<int:payload-deserializing-transformer input-channel="bytesIn" output-channel="objectOut" />
<int:channel id="objectOut" />
The above works fine for message lengths under 2048 bytes, but if the message exceeds this limit I get the following error.
Caused by: java.io.IOException: CRLF not found before max message length: 2048
at org.springframework.integration.ip.tcp.serializer.ByteArrayCrLfSerializer.fillToCrLf(ByteArrayCrLfSerializer.java:66)
at org.springframework.integration.ip.tcp.serializer.ByteArrayCrLfSerializer.deserialize(ByteArrayCrLfSerializer.java:44)
at org.springframework.integration.ip.tcp.serializer.ByteArrayCrLfSerializer.deserialize(ByteArrayCrLfSerializer.java:31)
at org.springframework.integration.ip.tcp.connection.TcpNetConnection.getPayload(TcpNetConnection.java:120)
at org.springframework.integration.ip.tcp.connection.TcpMessageMapper.toMessage(TcpMessageMapper.java:113)
at org.springframework.integration.ip.tcp.connection.TcpNetConnection.run(TcpNetConnection.java:165)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
How can I set maxMessageSize property on the payload-deserializing-transformer in this case?
This has nothing to do with the transformer; the error is in the outbound gateway.
First of all, you should not be using text-based delimiting for inbound TCP messages; a serialized object contains binary data and might contain CRLF (0x0d0a) somewhere in the middle.
You should be using one of the binary-capable deserializers in the gateway.
You can read about TCP serializers/deserializers in the reference manual.
You should configure the connection factory used by the outbound gateway with a ByteArrayLengthHeaderSerializer in its serializer and deserializer attributes; it can handle binary payloads.
The remote system will also need to be changed to use a length header instead of using CRLF to detect the end of a message. If the remote system is also Spring Integration, simply change its serializer/deserializer too.
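A minimal sketch of that change, reusing the connection factory from the question (the maxMessageSize value is an assumption; size it for your largest serialized object):
<bean id="lengthHeaderSerializer"
      class="org.springframework.integration.ip.tcp.serializer.ByteArrayLengthHeaderSerializer">
    <!-- assumed limit; the default is 2048 -->
    <property name="maxMessageSize" value="65536"/>
</bean>
<int-ip:tcp-connection-factory id="client"
    type="client"
    host="${client.server.TCP.host}"
    port="${client.server.TCP.port}"
    single-use="true"
    so-timeout="10000"
    serializer="lengthHeaderSerializer"
    deserializer="lengthHeaderSerializer" />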
For other readers who are using text-based messaging, the ByteArrayCrLfSerializer can be configured with a maxMessageSize, which defaults to 2048.
The ByteArrayLengthHeaderSerializer also has a configurable maxMessageSize (also defaulting to 2048); this exists to prevent OOM conditions when a bad message is received.
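For that text-based case, a sketch of raising the limit (the value is an assumption); the bean would then be referenced from the connection factory's serializer and deserializer attributes as above:
<bean id="crLfSerializer"
      class="org.springframework.integration.ip.tcp.serializer.ByteArrayCrLfSerializer">
    <!-- assumed limit for larger text messages -->
    <property name="maxMessageSize" value="8192"/>
</bean>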

Spring batch incorrect write skip count issue

I am new to Spring Batch, and I have an issue where my write skip count is recorded as the entire chunk size rather than just the number of invalid records in the chunk.
For example, I am reading 500 records with a chunk size of 100 records per chunk.
If the first chunk has 2 invalid records, then all records after the invalid ones are marked as invalid with the same exception, even though they are not invalid.
So the write_skip_count in batch_step_execution ends up as 100 for that batch, rather than 2.
On the other hand, the chunk with invalid records gets re-processed, and apart from the two invalid records, all records properly reach the destination.
The functionality is achieved, but the write_skip_count is wrong, which prevents us from producing a proper log. Please suggest what I am missing here.
I can see the following logs:
Checking for rethrow: count=1
Rethrow in retry for policy: count=1
Initiating transaction rollback on application exception
Below is the configuration we have tried so far:
<batch:step id="SomeStep">
<batch:tasklet>
<batch:chunk reader="SomeStepReader"
writer="SomeWriter" commit-interval="1000"
skip-limit="1000" retry-limit="1">
<batch:skippable-exception-classes>
<batch:include class="org.springframework.dao.someException" />
</batch:skippable-exception-classes>
<batch:retryable-exception-classes>
<batch:exclude class="org.springframework.dao.someException"/>
</batch:retryable-exception-classes>
</batch:chunk>
</batch:tasklet>
</batch:step>
After trying for some time, I figured out what happens when the writes go to a database that has no transaction manager configured, especially when your batch job reads from one database datasource and writes to another.
In that case, the batch fails the entire chunk and the skip count becomes the chunk size. It later re-processes the chunk with a commit interval of 1, skips only the faulty records, and processes the correct ones; but the write skip count is now incorrect, since it should have been only the count of faulty records.
To avoid this, create a transaction manager for the datasource you are writing to.
<bean id="SometransactionManager"
class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
<property name="dataSource" ref="SomedataSource" />
</bean>
Then, in the step where the writes happen, use that transaction manager:
<batch:step id="someSlaveStep">
<batch:tasklet transaction-manager="SometransactionManager">
<batch:chunk reader="SomeJDBCReader"
writer="SomeWriterBean" commit-interval="1000"
skip-limit="1000">
<batch:skippable-exception-classes>
<batch:include class="java.lang.Exception" />
</batch:skippable-exception-classes>
<batch:listeners>
<batch:listener ref="SomeSkipHandler" />
</batch:listeners>
</batch:chunk>
</batch:tasklet>
</batch:step>
Now a failure during the write happens inside one transaction; the faulty record is handled gracefully, and only the faulty record is logged in the batch tables under the write skip count.
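For reference, a minimal sketch of the skip listener referenced above as SomeSkipHandler (the class body and generic types are assumptions; match them to your reader and writer item types):
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.batch.core.SkipListener;

public class SomeSkipHandler implements SkipListener<String, String> {

    private static final Logger log = LoggerFactory.getLogger(SomeSkipHandler.class);

    @Override
    public void onSkipInRead(Throwable t) {
        log.warn("Record skipped during read", t);
    }

    @Override
    public void onSkipInProcess(String item, Throwable t) {
        log.warn("Record skipped during process: {}", item, t);
    }

    @Override
    public void onSkipInWrite(String item, Throwable t) {
        // With a transaction manager in place, only the genuinely faulty
        // record arrives here, so the write_skip_count stays accurate.
        log.warn("Record skipped during write: {}", item, t);
    }
}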

How to force Rabbit MQ to accumulate and send messages again?

I have several Spring-Integration elements configured in the XML file (see below)
From the AMQP channel adapter, messages are directed to the router integrationSecondaryRouter, whose implementation is integrationRouterImpl.
If there is an uncaught exception in integrationRouterImpl, I expect RabbitMQ to send the message again and again. However, this does not happen: the RabbitMQ monitor does not show any message accumulation. Is there an error in my configuration?
<int-amqp:inbound-channel-adapter
channel="integrationFrontDoorQueueChannel"
queue-names="${integration.creation.orders.queue.name}"
header-mapper="integrationHeaderMapper"
connection-factory="connectionFactory"
error-channel="errorChannel"
/>
<int:chain
id="integrationFrontDoorQueueChain"
input-channel="integrationFrontDoorQueueChannel"
output-channel="integrationRouterChannel">
<int:transformer ref="integrationJsonPayloadTransformer" method="transformMessagePayload"/>
<int:filter ref="integrationNonDigitalCancellationFilter" method="filter"/>
<int:filter ref="integrationPartnerFilter" method="filter"/>
<int:filter ref="integrationOrderDtoDgcAndGoSelectFilter" method="filter"/>
</int:chain>
<int:header-value-router
id="integrationPrimaryRouter"
input-channel="integrationRouterChannel"
default-output-channel="integrationFrontDoorRouterChannel"
resolution-required="false"
header-name="#{T(com.smartdestinations.constants.SdiConstants).INTEGRATION_PAYLOAD_ACTION_HEADER_KEY}">
<int:mapping
value="#{T(com.smartdestinations.service.integration.dto.IntegrationAction).EXCLUSION_SCAN.name()}"
channel="integrationExclusionChannel"
/>
</int:header-value-router>
<int:router
id="integrationSecondaryRouter"
ref="integrationRouterImpl"
input-channel="integrationFrontDoorRouterChannel"
method="route"
resolution-required="false"
default-output-channel="nullChannel"
/>
Look, you have error-channel="errorChannel" and the Documentation on the matter points out:
The default "errorChannel" is a PublishSubscribeChannel.
Yes, there is one subscriber, but it is just the _org.springframework.integration.errorLogger.
Since nothing re-throws your exception to the SimpleMessageListenerContainer, there is no reason to nack the message and redeliver it.
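A minimal sketch of the simplest fix, assuming you want the container's default behavior (requeue on exception): drop the error-channel attribute so the exception propagates back to the listener container.
<!-- without error-channel, an uncaught exception reaches the
     SimpleMessageListenerContainer, the message is nacked,
     and RabbitMQ redelivers it -->
<int-amqp:inbound-channel-adapter
    channel="integrationFrontDoorQueueChannel"
    queue-names="${integration.creation.orders.queue.name}"
    header-mapper="integrationHeaderMapper"
    connection-factory="connectionFactory"
/>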

Activiti from Camel Receive Task throws error "ActivitiIllegalArgumentException: Business key is null"

Please share any links on configuring Activiti with Camel. All the examples I could find show SERVICETASK -> CAMELROUTE -> FILE and then FILE -> RECEIVETASK (Activiti).
These involve a BUSINESS_KEY, which I couldn't figure out.
I need an example showing SERVICE TASK -> CAMEL ROUTE -> RECEIVETASK (signalling Activiti). I don't know why, but this example gives me the error below.
file: activiti-flow.bpmn20.xml:
<process id="camelprocess" name="My process" isExecutable="true">
<startEvent id="startevent1" name="Start"></startEvent>
<serviceTask id="servicetask1" name="Service Task" activiti:async="true" activiti:delegateExpression="${camel}"></serviceTask>
<receiveTask id="receivetask1" name="Receive Task"></receiveTask>
<endEvent id="endevent1" name="End"></endEvent>
<sequenceFlow id="flow1" sourceRef="startevent1" targetRef="servicetask1"></sequenceFlow>
<sequenceFlow id="flow2" sourceRef="servicetask1" targetRef="receivetask1"></sequenceFlow>
<sequenceFlow id="flow3" sourceRef="receivetask1" targetRef="endevent1"></sequenceFlow>
activiti-camel-spring.xml
<bean id="camel" class="org.activiti.camel.CamelBehaviour">
<constructor-arg index="0">
<list>
<bean class="org.activiti.camel.SimpleContextProvider">
<constructor-arg index="0" value="camelprocess" />
<constructor-arg index="1" ref="camelContext" />
</bean>
</list>
</constructor-arg>
</bean>
<camel:camelContext id="camelContext">
<camel:route>
<camel:from uri="activiti:camelprocess:servicetask1"/>
<camel:to uri="bean:serviceActivator?method=doSomething(${body})"/>
<camel:to uri="activiti:camelprocess:receivetask1"/>
</camel:route>
</camel:camelContext>
Error is:
1|ERROR|org.slf4j.helpers.MarkerIgnoringBase:161||||>> Failed delivery for (MessageId: ID-viscx73-PC-49557-1376961951564-0-1 on ExchangeId: ID-viscx73-PC-49557-1376961951564-0-2). Exhausted after delivery attempt: 1 caught: org.activiti.engine.ActivitiIllegalArgumentException: Business key is null
at org.activiti.engine.impl.ProcessInstanceQueryImpl.processInstanceBusinessKey(ProcessInstanceQueryImpl.java:87)
at org.activiti.camel.ActivitiProducer.findProcessInstanceId(ActivitiProducer.java:78)
at org.activiti.camel.ActivitiProducer.signal(ActivitiProducer.java:58)
at org.activiti.camel.ActivitiProducer.process(ActivitiProducer.java:49)
at org.apache.camel.util.AsyncProcessorConverterHelper$ProcessorToAsyncProcessorBridge.process (AsyncProcessorConverterHelper.java:61)
at org.apache.camel.util.AsyncProcessorHelper.process(AsyncProcessorHelper.java:73)
All the forums/links I found show ACTIVITI -> CAMELROUTE (FILE) and then, in another route, CAMEL_FILE -> RECEIVETASK.
They suggest adding a key like PROCESS_KEY_PROPERTY or PROCESS_ID_PROPERTY, but I don't understand where these properties fit in.
I am trying to work from the example at this link:
http://bpmn20inaction.blogspot.in/2013/03/using-camel-routes-in-activiti-made.html
I am not sure whether the process, after handing the service task over to Camel, is simply not moving on to the receive task and is waiting there, or whether Camel is unable to find the receive task.
Please share some suggestions on this.
Thanks
It worked after adding the built-in Camel queues shown in the example. I had thought they were just illustrating different routes, but passing the message to a queue effectively made the ServiceTask asynchronous in Camel; the messages were later read from the queue and the receive task in Activiti was signalled:
<camel:to uri="seda:tempQueue"/>
<camel:from uri="seda:tempQueue"/>
Thanks
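Put together, a sketch of how the seda queue splits the camelContext from the question into two routes (only tempQueue is new; the other URIs are from the original configuration):
<camel:camelContext id="camelContext">
    <camel:route>
        <camel:from uri="activiti:camelprocess:servicetask1"/>
        <camel:to uri="bean:serviceActivator?method=doSomething(${body})"/>
        <camel:to uri="seda:tempQueue"/>
    </camel:route>
    <camel:route>
        <camel:from uri="seda:tempQueue"/>
        <camel:to uri="activiti:camelprocess:receivetask1"/>
    </camel:route>
</camel:camelContext>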
I don't know whether you have solved the problem or not, but I faced the same problem.
And finally, I found a solution.
It is correct that the PROCESS_ID_PROPERTY property must be provided; otherwise the Activiti engine doesn't know which process instance to execute. So I set the PROCESS_ID_PROPERTY value in the header when sending the JMS message to ActiveMQ, and when the message comes back, I set the property from the header again. Something like:
from("activiti:process:simpleCall").setHeader("PROCESS_ID_PROPERTY", simple("${property.PROCESS_ID_PROPERTY}")).to("activemq:queue:request");
from("activemq:queue:reply").setProperty("PROCESS_ID_PROPERTY", simple("${header.PROCESS_ID_PROPERTY}")).to("activiti:process:simpleReceive");
Hope it will help you.

How to configure 2.6 spring: Failed to create route route2 at:

I'm trying to upgrade from Camel 2.0 to 2.6.
I have this in my applicationContext-camel.xml file...
<camel:route >
<camel:from uri="transactionSaleBuffer" />
<camel:policy ref="routeTransactionPolicy"/>
<camel:transacted ref="transactionManagerETL" />
<camel:to uri="detailFactProcessor" />
</camel:route>
By adding the two lines in the middle (policy and transacted) I get the exception...
Caused by: org.apache.camel.FailedToCreateRouteException: Failed to create route route2 at: >>> From[transactionSaleBuffer] <<< in route: Route[[From[transactionSaleBuffer]] -> [Transacted[ref:trans... because of Route route2 has no output processors. You need to add outputs to the route such as to("log:foo").
I can see this is because the Camel class RouteDefinition.java calls ProcessorDefinitionHelper.hasOutputs(outputs, true).
This passes in an array of one object ([Transacted[ref:transactionManagerETL]]).
This one object has two children:
[Transacted[ref:transactionManagerETL]]
CHILD-[Policy[ref:routeTransactionPolicy],
CHILD-To[detailFactProcessor]
The Policy child has no outputs, so the exception is thrown.
Yet I don't know how to add a child; my XML above matches the schema.
Maybe I'm missing something else?
My setup matches the example in Apache Camel: Book in One Page (see section: Camel 1.x - JMS Sample).
Can anyone please help me out?
Thanks!
Jeff Porter
Try as follows
<camel:route>
    <camel:from uri="transactionSaleBuffer" />
    <camel:transacted ref="transactionManagerETL" />
    <camel:policy ref="routeTransactionPolicy">
        <camel:to uri="detailFactProcessor" />
    </camel:policy>
</camel:route>
