I have a requirement to read files from a remote SFTP location, with the following steps:
1. Read the file from the SFTP location, say an input folder.
2. Process the file and call a REST API with its content.
3. If the call is successful, move the remote SFTP file to an archive folder; otherwise move it to an error folder.
I am thinking of two approaches, but I don't know whether they are possible.
First: download the file from the remote SFTP location to a local directory and delete it from the remote. Then call the REST API and, depending on whether the call succeeds or fails, upload the file to the corresponding remote folder.
Second: download the file from the remote SFTP location to a local directory, leaving the remote copy in place. Then call the REST API and, depending on the response, upload the file to the corresponding remote folder.
Can anyone enlighten me as to which approach is possible and more convenient? I would also appreciate it if you could mention the relevant channel adapters.
So far I am able to call the REST API:
<int-sftp:inbound-channel-adapter id="sftpInboundAdapter"
auto-startup="true"
channel="receiveChannel"
session-factory="sftpSessionFactory"
local-directory="${local.dir}"
remote-directory="${sftp.dir.input}"
auto-create-local-directory="true"
delete-remote-files="true"
filename-pattern="*.txt">
<int:poller fixed-rate="60000" max-messages-per-poll="1" />
</int-sftp:inbound-channel-adapter>
<int:channel id="receiveChannel"/>
<int:splitter input-channel="receiveChannel" output-channel="singleFile"/>
<int:channel id="singleFile"/>
<int:service-activator input-channel="singleFile"
ref="sftpFileListenerImpl" method="processMessage" output-channel="costUpdate" />
<int:channel id="costUpdate" />
<int:header-enricher input-channel="costUpdate" output-channel="headerEnriched">
<int:header name="content-type" value="application/json" />
</int:header-enricher>
<int:channel id="headerEnriched" />
<int-http:outbound-gateway
url="${cost.center.add.rest.api}" request-channel="headerEnriched"
http-method="POST" expected-response-type="java.lang.String" reply-channel="costAdded" >
</int-http:outbound-gateway>
<int:publish-subscribe-channel id="costAdded" />
I want to move the remote file to another remote location once the API call has succeeded, after evaluating its response. My question is: how do I move the remote file to another remote location based on the response of the http:outbound-gateway?
See the retry-and-more sample, specifically the Expression Evaluating Advice Demo - it shows how to take different actions based on the success/failure of an upload.
EDIT:
In response to your comments on Artem's answer: since you have delete-remote-files="true" there is no remote file left to move; you must first set that to false.
Then use the advice as I suggested to handle success or failure - use an int-sftp:outbound-gateway and mv the remote file. See the gateway documentation for the mv command.
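A sketch of what that can look like, with delete-remote-files set to false on the inbound adapter: an ExpressionEvaluatingRequestHandlerAdvice on the http:outbound-gateway routes to a success or failure channel, each of which can feed an int-sftp:outbound-gateway issuing mv. The channel names, the ${sftp.dir.*} properties, and the assumption that the remote path is still available in the file_remoteDirectory/file_remoteFile headers at this point in the flow are all mine, not from the original config:

```xml
<int-http:outbound-gateway url="${cost.center.add.rest.api}"
        request-channel="headerEnriched" http-method="POST"
        expected-response-type="java.lang.String" reply-channel="costAdded">
    <int-http:request-handler-advice-chain>
        <bean class="org.springframework.integration.handler.advice.ExpressionEvaluatingRequestHandlerAdvice">
            <property name="onSuccessExpression" value="payload" />
            <property name="successChannel" ref="moveToArchive" />
            <property name="onFailureExpression" value="payload" />
            <property name="failureChannel" ref="moveToError" />
        </bean>
    </int-http:request-handler-advice-chain>
</int-http:outbound-gateway>

<int:channel id="moveToArchive" />
<int:channel id="moveToError" />

<!-- mv the remote file on success; a similar gateway on moveToError
     would move it to an error directory instead -->
<int-sftp:outbound-gateway session-factory="sftpSessionFactory"
        request-channel="moveToArchive" command="mv"
        expression="headers['file_remoteDirectory'] + headers['file_remoteFile']"
        rename-expression="'${sftp.dir.archive}/' + headers['file_remoteFile']"
        reply-channel="nullChannel" />
```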
We've recently migrated a Spring REST application from Wildfly 15.0.1.Final to Wildfly 21.0.0.Final which apparently introduced an issue with GET requests: whenever we have a | (pipe) character in the query parameter string of the GET request, the request returns no response and we get ERR_HTTP2_PROTOCOL_ERROR.
I know that the '|' (pipe) character is unsafe according to the RFC 1738 URL specification, and RFC 3986 does not include it in the allowed URI character set, so it has to be percent-encoded.
I would like this to keep working, though, as we have external clients sending requests with the | character in their query parameters; if we moved to the current Wildfly 21 config as-is, those requests would fail.
The same configuration was working fine on Wildfly 15.0.1.Final.
I have these in standalone.xml, to no avail:
<system-properties>
<property name="org.apache.catalina.connector.URI_ENCODING" value="UTF-8"/>
<property name="org.apache.catalina.connector.USE_BODY_ENCODING_FOR_QUERY_STRING" value="true"/>
</system-properties>
<http-listener name="default" socket-binding="http" allow-unescaped-characters-in-url="true" redirect-socket="https" enable-http2="true" url-charset="UTF-8" />
<https-listener name="https" socket-binding="https" max-post-size="1048576000" allow-unescaped-characters-in-url="true" ssl-context="LocalhostSslContext" enable-http2="true" url-charset="UTF-8" />
...and this in standalone.conf.bat:
set "JAVA_OPTS=%JAVA_OPTS% -Dorg.apache.catalina.connector.URI_ENCODING=UTF-8"
The very same code on the very same VM, with the (migrated) config, works fine on Wildfly 15.0.1.Final but throws ERR_HTTP2_PROTOCOL_ERROR on Wildfly 21.0.0.Final whenever there is a | in the request. In these cases the request does not even seem to hit my breakpoints.
I can do a dirty programmatic fix by URL-encoding every | in our $.ajaxSetup, but this only fixes requests originating from the server itself, not requests coming externally with | in their GET query parameters.
The dirty (and insufficient) fix:
$.ajaxSetup({
    beforeSend: function (jqXHR, settings) {
        // Percent-encode every pipe character, not just doubled ones
        settings.url = settings.url.replace(/\|/g, "%7C");
    }
});
Has anyone encountered this issue?
Full standalone.xml (with sensitive info masked) here.
EDIT: In the meantime I noticed that this issue only happens when I hit endpoints defined in Windows hosts file. When I go through our company's load balancer, it works fine.
So e.g. http://localhost.myproduct.com does not work from SERVER1 if 127.0.0.1 localhost.myproduct.com is in the hosts file, but https://server1.myproduct.com, which hits the very same server, works fine when the endpoint is routed through the load balancer.
I saw a few related postings around this time, all of which seem to have gone unanswered.
I've also encountered a similar issue with Wildfly 23.0.0.Final, which turned out to be a problem with HTTP/2 handling. There is a fix for it (UndertowOptions.ALLOW_UNESCAPED_CHARACTERS_IN_URL has no effect for HTTP/2), but as of this reply it has, as far as I know, not yet been released in a Wildfly build.
Setting enable-http2="false" on the listeners - while not ideal - worked around the problem for me.
It could be that your load balancer speaks HTTP/1.1 to the backend, which would explain why you don't encounter the problem when routing through it.
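Concretely, the workaround is just flipping enable-http2 on the two listeners shown in the question; all other attributes are kept exactly as in the original standalone.xml:

```xml
<http-listener name="default" socket-binding="http"
               allow-unescaped-characters-in-url="true" redirect-socket="https"
               enable-http2="false" url-charset="UTF-8" />
<https-listener name="https" socket-binding="https" max-post-size="1048576000"
                allow-unescaped-characters-in-url="true" ssl-context="LocalhostSslContext"
                enable-http2="false" url-charset="UTF-8" />
```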
I'm in the process of creating a simple Mule flow in Anypoint Studio - it polls a directory periodically, and when a file is placed in the directory it sends it to an SFTP server. However, when the application starts negotiating a secure connection with the server, it fails with this error:
java.io.IOException: Error during login to username#host:
Session.connect: java.security.InvalidAlgorithmParameterException: DH
key size must be multiple of 64, and can only range from 512 to 8192
(inclusive). The specific key size 2047 is not supported
The stack trace references several files from the jsch library. The solutions in previous questions recommended upgrading to Java 8, using a different version of jsch, or editing the jsch jars themselves. My Mule server (version 3.9.0 EE) is already on Java 8, I've tried a few different versions of jsch, and editing the jars is not practical, since this application will be deployed to a few different environments.
I'm able to log in to the sftp server using the same credentials as the application via WinSCP. A coworker has tried modifying a working flow to use the same credentials to move the same file, and they get the same error. Here is the XML of my flow:
<flow name="ClCoFlow">
<file:inbound-endpoint path="${file.from}"
moveToDirectory="${file.backup}" responseTimeout="10000"
doc:name="Get File to Transfer" />
<logger
message="#[flowVars.originalFilename] being moved to #[flowVars.moveToDirectory]"
level="INFO" doc:name="File In" />
<sftp:outbound-endpoint exchange-pattern="one-way"
host="${sftp.host}" port="${sftp.port}" path="${sftp.path}" user="${sftp.user}"
password="${sftp.password}" responseTimeout="10000" doc:name="SFTP" />
<logger message="#[flowVars.originalFilename] sent to sftp service"
level="INFO" doc:name="File sent" />
</flow>
Thanks in advance for any help you can provide.
EDIT
Though Mule is built on Java, and Mule applications are built behind the scenes using Java and Spring, there is no writing of actual Java code involved in creating a Mule flow.
Changing the security provider seems to be the way to go here. Unfortunately, there is no way to do so with the Mule connectors, so we more or less had to re-write the SFTP step in plain Java. After downloading the Bouncy Castle jars, put them in src/main/app/lib, then add them to the build path. You should then be able to import them (for some reason I had to import org.python.bouncycastle.jce.provider rather than org.bouncycastle.jce.provider). At the top of my code I put:
Security.insertProviderAt(new BouncyCastleProvider(), 1);
and when the flow runs, the dh key is properly negotiated and no errors are thrown.
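For reference, you can see which JCE providers are installed, and in what order they are consulted, with plain JDK code (a minimal sketch, nothing Mule- or Bouncy Castle-specific). Inserting Bouncy Castle at position 1, as above, makes its Diffie-Hellman implementation take precedence over the JDK's:

```java
import java.security.Provider;
import java.security.Security;

public class ProviderOrder {
    public static void main(String[] args) {
        // Providers are consulted in this order; position 1 wins,
        // which is why insertProviderAt(new BouncyCastleProvider(), 1) works.
        Provider[] providers = Security.getProviders();
        for (int i = 0; i < providers.length; i++) {
            System.out.println((i + 1) + ": " + providers[i].getName());
        }
    }
}
```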
I have a server that I wish to connect to via SSL, and then listen for data. I have a Camel route set up (via Spring) with a Netty4 endpoint as follows:
<camel:endpoint id="sslEndpoint" uri="netty4:tcp://{{server.host}}:{{server.port}}">
<camel:property key="clientMode" value="true" />
<camel:property key="needClientAuth" value="true" />
<camel:property key="sync" value="false" />
<camel:property key="ssl" value="true" />
<camel:property key="keyStoreResource" value="file:{{server.keystore}}" />
<camel:property key="trustStoreResource" value="file:{{server.truststore}}" />
<camel:property key="passphrase" value="{{server.passphrase}}" />
</camel:endpoint>
The route is configured in Java with this endpoint as the from part of the route:
public class MyRoute extends RouteBuilder {
@Override
public void configure() {
from("ref:sslEndpoint")
.to("log:MyLog?level=DEBUG");
}
}
By default a from endpoint will create a NettyConsumer, which acts as a server - hence specifying clientMode=true on the endpoint. This is honoured when using a plain TCP connection (it does indeed connect as a client and receive data sent to it by the server). However, when using SSL it doesn't initiate the SSL handshake, meaning the server never sends any data.
I have rooted through the Camel Netty4 code, and the key issue is in DefaultServerInitializerFactory where a new SSL Connection is configured - the SSLEngine has a hard-coded setUseClientMode(false). Sticking a breakpoint here and changing the call to true does indeed cause Netty to connect to the server, initiate the SSL handshake, and start consuming received data.
So my question is twofold:
How can I best resolve this issue and make the SSL Client initiate a handshake? Have I just missed something obvious?
Is this a bug in Camel/Netty4, as it would appear to me that the SSL connection should honour the clientMode property of the endpoint?
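For anyone digging into this: the flag in question is plain JSSE, not Camel-specific. A minimal sketch (no Netty involved) of what the initializer would have to do for a consumer running in client mode:

```java
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLEngine;

public class ClientModeFlag {
    public static void main(String[] args) throws Exception {
        SSLEngine engine = SSLContext.getDefault().createSSLEngine();
        // This is the call DefaultServerInitializerFactory hard-codes to false;
        // an outbound TLS connection must be in client mode so the engine
        // sends a ClientHello instead of waiting for one.
        engine.setUseClientMode(true);
        System.out.println(engine.getUseClientMode()); // prints true
    }
}
```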
I have a Spring core application with the config below.
I'm using UserCredentialsConnectionFactoryAdapter, MQQueueConnectionFactory and a jms-listener.
<jms:listener-container container-type="default"
connection-factory="userConnectionFactory" acknowledge="auto">
<jms:listener destination="${QUEUE_NAME_IN_GEN}" ref="messageListener"
method="onMessage" />
</jms:listener-container>
<bean id="userConnectionFactory"
class="org.springframework.jms.connection.UserCredentialsConnectionFactoryAdapter">
<property name="targetConnectionFactory">
<ref bean="mqConnectionFactory" />
</property>
<property name="username" value="${MQ_USER_ID}" />
</bean>
<bean id="mqConnectionFactory" class="com.ibm.mq.jms.MQQueueConnectionFactory">
<property name="hostName" value="${MQ_HOST_NAME}" />
<property name="port" value="${MQ_PORT}" />
<property name="queueManager" value="${QUEUE_MANAGER}" />
<property name="transportType" value="1" />
</bean>
On application startup, the listener starts perfectly on one machine.
When I try the same artifacts on a different server, the listener fails to start with the following error:
[org.springframework.jms.listener.DefaultMessageListenerContainer#0-1] ERROR org.springframework.jms.listener.DefaultMessageListenerContainer.refreshConnectionUntilSuccessful(DefaultMessageListenerContainer.java:909) - Could not refresh JMS Connection for destination 'R.ABCDEF' - retrying in 5000 ms. Cause: MQJMS2005: failed to create MQQueueManager for 'myhost:dev'; nested exception is com.ibm.mq.MQException: MQJE001: An MQException occurred: Completion Code 2, Reason 2058
MQJE036: Queue manager rejected connection attempt
To figure out whether it is a Unix account privilege issue on the second server, I wrote a simple MQ client program. This program can connect to the queue manager and read messages from it.
What else could be wrong?
A 2058 reason code suggests that the queue manager name is incorrect. According to the technote from IBM, that is the most common cause, although there are others.
The following extract is taken from this technote:
Ensure that the queue manager name is specified correctly on:
MQCONN API calls
QREMOTE object definitions
Client connection channel definitions
Debugging QCF, TCF, or client connection problems is much more complex.
Ensure that the connection request is routed to the intended machine and queue manager.
Verify that the listener program is starting the channel on the correct queue manager.
Ensure that the specifications for the client environment variables are correct.
MQSERVER
MQCHLLIB
MQCHLTAB
If you are using a client channel table (amqclchl.tab), then verify that your client connection channel definition has the correct queue manager name (QMNAME) specified.
Specify the correct queue manager name.
Correct channel routing problems.
Correct inetd listener configuration problems.
Correct client related configuration problems.
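One thing worth cross-checking against the config in the question: the MQQueueConnectionFactory there sets no channel, and with transportType 1 (client transport) the server-connection channel name usually has to be supplied explicitly; a wrong or defaulted channel definition can also surface as a 2058. A sketch, where ${MQ_CHANNEL} is an assumed property, not one from the original config:

```xml
<bean id="mqConnectionFactory" class="com.ibm.mq.jms.MQQueueConnectionFactory">
    <property name="hostName" value="${MQ_HOST_NAME}" />
    <property name="port" value="${MQ_PORT}" />
    <property name="queueManager" value="${QUEUE_MANAGER}" />
    <!-- assumed: the SVRCONN channel name the queue manager exposes -->
    <property name="channel" value="${MQ_CHANNEL}" />
    <property name="transportType" value="1" />
</bean>
```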
I have a simple FTP outbound gateway configured for ls, get, and rm:
<int-ftp:outbound-gateway id="gatewayLS"
cache-sessions="false"
session-factory="incomingCachingSessionFactory"
request-channel="inboundChannel"
command="ls"
command-options="-1"
expression="'${ftp.pull.remote.directory}'"
reply-channel="toSplitter" />
<channel id="toSplitter">
<interceptors>
<wire-tap channel="logger" />
</interceptors>
</channel>
<logging-channel-adapter id="logger"
log-full-message="true" level="DEBUG" />
<splitter id="splitter" input-channel="toSplitter"
output-channel="toGet" />
<int-ftp:outbound-gateway id="gatewayGET"
cache-sessions="false"
local-directory="${ftp.pull.local.directory}"
session-factory="incomingCachingSessionFactory"
request-channel="toGet"
reply-channel="downloadedFileChannel"
command="get"
command-options="-P"
expression="headers['file_remoteDirectory'] + '/' + payload" />
This works perfectly on my development Windows box, connecting to multiple different FTP servers,
and my log shows the expected output:
DEBUG: org.springframework.integration.ftp.session.DefaultFtpSessionFactory - Connected to server [ftp.domain.com:21]
DEBUG: org.springframework.integration.ftp.gateway.FtpOutboundGateway - handler 'org.springframework.integration.ftp.gateway.FtpOutboundGateway#0' sending reply Message: [Payload=[test_file.zip]][Headers={timestamp=1343143242030, id=56758ef9-57e5-43d6-b8b7-c36539d9fd0d, file_remoteDirectory=/images/}]
The time between "Connected to server" and "sending reply Message" is also pretty much instantaneous.
However, once deployed to two CentOS servers (5.8 and 6.2), the reply channel of the ls is always empty,
e.g.
DEBUG: org.springframework.integration.ftp.session.DefaultFtpSessionFactory - Connected to server [ftp.domain.com:21]
DEBUG: org.springframework.integration.ftp.gateway.FtpOutboundGateway - handler 'org.springframework.integration.ftp.gateway.FtpOutboundGateway#0' sending reply Message: [Payload=[]][Headers={timestamp=1343143961046, id=31759d6f-201e-4028-8943-0a68ae64db81, file_remoteDirectory=/images/}]
The time between "Connected to server" and "sending reply Message" is also abnormally long.
Some more info:
all 3 machines use Maven to build the WAR from the same code base
the 2 CentOS machines are in different datacentres
I have tried multiple unrelated FTP servers, with the same results
there are definitely files on the FTP server
the FTP servers are accessible from the CentOS boxes, confirmed using "ncftp" and "ftp"
I know this post is pretty vague, but it is driving me nuts. Any thoughts greatly appreciated.
I had to set <property name="clientMode" value="2" /> on my DefaultFtpSessionFactory.
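For context, clientMode 2 corresponds to passive mode (commons-net's FTPClient.PASSIVE_LOCAL_DATA_CONNECTION_MODE). In active mode, the default, the server opens the data connection back to the client, which firewalls on the CentOS boxes can silently block, so ls returns an empty list after a long wait. A sketch of the session factory bean; the ${ftp.pull.*} placeholder names are assumptions, not from the original config:

```xml
<bean id="incomingSessionFactory"
      class="org.springframework.integration.ftp.session.DefaultFtpSessionFactory">
    <property name="host" value="${ftp.pull.host}" />
    <property name="username" value="${ftp.pull.user}" />
    <property name="password" value="${ftp.pull.password}" />
    <!-- 2 = passive mode: the client opens the data connection -->
    <property name="clientMode" value="2" />
</bean>
```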