I have a simple FTP outbound gateway configured for ls, get, and rm:
<int-ftp:outbound-gateway id="gatewayLS"
cache-sessions="false"
session-factory="incomingCachingSessionFactory"
request-channel="inboundChannel"
command="ls"
command-options="-1"
expression="'${ftp.pull.remote.directory}'"
reply-channel="toSplitter" />
<channel id="toSplitter">
<interceptors>
<wire-tap channel="logger" />
</interceptors>
</channel>
<logging-channel-adapter id="logger"
log-full-message="true" level="DEBUG" />
<splitter id="splitter" input-channel="toSplitter"
output-channel="toGet" />
<int-ftp:outbound-gateway id="gatewayGET"
cache-sessions="false"
local-directory="${ftp.pull.local.directory}"
session-factory="incomingCachingSessionFactory"
request-channel="toGet"
reply-channel="downloadedFileChannel"
command="get"
command-options="-P"
expression="headers['file_remoteDirectory'] + '/' + payload" />
This works perfectly on my development Windows box, connecting to multiple different FTP servers, and my log shows the expected output:
DEBUG: org.springframework.integration.ftp.session.DefaultFtpSessionFactory - Connected to server [ftp.domain.com:21]
DEBUG: org.springframework.integration.ftp.gateway.FtpOutboundGateway - handler 'org.springframework.integration.ftp.gateway.FtpOutboundGateway#0' sending reply Message: [Payload=[test_file.zip]][Headers={timestamp=1343143242030, id=56758ef9-57e5-43d6-b8b7-c36539d9fd0d, file_remoteDirectory=/images/}]
The time between the "Connected to server" and "sending reply Message" entries is also pretty much instantaneous.
However, once deployed to two CentOS servers (5.8 and 6.2), the reply channel of the ls is always empty, e.g.
DEBUG: org.springframework.integration.ftp.session.DefaultFtpSessionFactory - Connected to server [ftp.domain.com:21]
DEBUG: org.springframework.integration.ftp.gateway.FtpOutboundGateway - handler 'org.springframework.integration.ftp.gateway.FtpOutboundGateway#0' sending reply Message: [Payload=[]][Headers={timestamp=1343143961046, id=31759d6f-201e-4028-8943-0a68ae64db81, file_remoteDirectory=/images/}]
The time between the "Connected to server" and "sending reply Message" entries is also abnormally long.
Some more info:
All 3 machines use Maven to build the WAR from the same code base.
The 2 CentOS machines are in different datacentres.
I have tried multiple different, unrelated FTP servers, with the same results.
There are definitely files on the FTP server.
The FTP servers are accessible from the CentOS boxes, confirmed using "ncftp" and "ftp".
I know this post is pretty vague, but it is driving me nuts. Any thoughts greatly appreciated.
I had to set <property name="clientMode" value="2" /> (FTP passive mode) on my DefaultFtpSessionFactory.
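For reference, a sketch of a full session-factory bean with that property set (the bean id, host, and credential placeholders are illustrative; in the config above this would sit behind the incomingCachingSessionFactory wrapper):

```xml
<bean id="incomingSessionFactory"
      class="org.springframework.integration.ftp.session.DefaultFtpSessionFactory">
    <property name="host" value="${ftp.host}"/>
    <property name="port" value="21"/>
    <property name="username" value="${ftp.user}"/>
    <property name="password" value="${ftp.password}"/>
    <!-- 2 = passive mode: the client opens the data connection itself,
         which works behind firewalls that block active-mode (server-initiated)
         data connections - a common difference between a dev box and a
         locked-down datacentre host -->
    <property name="clientMode" value="2"/>
</bean>
```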
For the last few days we have been facing a problem because Firefox 93 (the latest version) protects against insecure downloads: Firefox blocks insecure HTTP downloads on a secure HTTPS page, so we cannot download our report, which is served over HTTP (without SSL), from our production site, which runs on SSL-certified HTTPS.
Here is how our report is generated from our production site: when a user sends a request to fetch or download the report, a request goes to the report server via Java code, and the report server then responds with the report for downloading or fetching.
Sample response URL from the report server: http://report.abc.com/mycertificate.doc
So whenever we request a report download, we get a security-related warning from the Firefox browser.
We tried to tackle this in a couple of ways:
1) Enabling SSL on the report server: this did not work for us because the report server runs on Windows Server 2003, and support on that platform is not good enough to enable SSL.
2) Reverse proxy: we also tried the reverse-proxy approach in our project, which is deployed on JBoss (WildFly version 9), by making the following changes in the standalone.xml file, but with no luck:
<subsystem xmlns="urn:jboss:domain:undertow:2.0">
<buffer-cache name="default"/>
<server name="default-server">
<http-listener name="default" socket-binding="http" redirect-socket="https"/>
<host name="default-host" alias="localhost">
<location name="/" handler="welcome-content"/>
<location name="/myservices/services" handler="myproxy"/>
</host>
</server>
<handlers>
<file name="welcome-content" path="${jboss.home.dir}/welcome-content"/>
<reverse-proxy name="myproxy">
<host name="http://www.example.com" instance-id="myRoute" outbound-socket-binding="https-remote"/>
</reverse-proxy>
</handlers>
</subsystem>
<outbound-socket-binding name="https-remote">
<remote-destination host="http://www.example.com" port="${jboss.https.port:8443}"/>
</outbound-socket-binding>
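One thing to double-check in the snippet above: remote-destination expects a bare hostname, not a URL with a scheme. A hedged corrected fragment (hostname still the placeholder from the question) might look like:

```xml
<outbound-socket-binding name="https-remote">
    <!-- host must be a hostname only; the scheme and port are implied
         by the binding and the listener that uses it -->
    <remote-destination host="www.example.com" port="${jboss.https.port:8443}"/>
</outbound-socket-binding>
```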
I made the above changes to the JBoss standalone configuration file, but this configuration did not help. What I am looking for is a reverse proxy in WildFly 9 that can handle insecure HTTP downloads on a secure HTTPS page.
Any help would be appreciated.
Thanks
We've recently migrated a Spring REST application from Wildfly 15.0.1.Final to Wildfly 21.0.0.Final which apparently introduced an issue with GET requests: whenever we have a | (pipe) character in the query parameter string of the GET request, the request returns no response and we get ERR_HTTP2_PROTOCOL_ERROR.
I know that the '|' (pipe) character is unsafe according to RFC 1738 (the URL specification), while RFC 3986 allows such characters to be percent-encoded.
I would like this to keep working, though, as we have external clients sending requests with the | character in the query parameters, and currently, if we moved to the Wildfly 21 config, those requests would fail.
The same configuration was working fine on Wildfly 15.0.1.Final.
I have tried these in standalone.xml, to no avail:
<system-properties>
<property name="org.apache.catalina.connector.URI_ENCODING" value="UTF-8"/>
<property name="org.apache.catalina.connector.USE_BODY_ENCODING_FOR_QUERY_STRING" value="true"/>
</system-properties>
<http-listener name="default" socket-binding="http" allow-unescaped-characters-in-url="true" redirect-socket="https" enable-http2="true" url-charset="UTF-8" />
<https-listener name="https" socket-binding="https" max-post-size="1048576000" allow-unescaped-characters-in-url="true" ssl-context="LocalhostSslContext" enable-http2="true" url-charset="UTF-8" />
...and this in standalone.conf.bat:
set "JAVA_OPTS=%JAVA_OPTS% -Dorg.apache.catalina.connector.URI_ENCODING=UTF-8"
The very same code on the very same VM, with (migrated) config works fine on Wildfly 15.0.1.Final but throws the ERR_HTTP2_PROTOCOL_ERROR in Wildfly 21.0.0.Final whenever I have a | in the request. In these cases it looks like the request is not even hitting my breakpoints.
I can programmatically do a dirty fix by URL encoding all | in our $.ajaxSetup, but this only fixes requests originating from the server itself, and not requests that are coming externally with | in their GET request query params.
The dirty (and insufficient) fix:
$.ajaxSetup({
beforeSend: function (jqXHR, settings) {
settings.url = settings.url.replace(/\|\|/g, "%7C%7C");
}
});
Has anyone encountered this issue?
Full standalone.xml (with sensitive info masked) here.
EDIT: In the meantime I noticed that this issue only happens when I hit endpoints defined in Windows hosts file. When I go through our company's load balancer, it works fine.
So e.g. http://localhost.myproduct.com is not working from SERVER1 if 127.0.0.1 localhost.myproduct.com is in hosts file, but https://server1.myproduct.com that hits the very same server works fine, if the endpoint is routed through the load balancer.
I saw a few related postings around this time, all of which seem to have gone unanswered.
I've also encountered a similar issue with Wildfly 23.0.0.Final, which was a problem with HTTP/2 handling. There is a fix for that: UndertowOptions.ALLOW_UNESCAPED_CHARACTERS_IN_URL has no effect for HTTP/2, but as of this reply, AFAIK, it is not yet released in a Wildfly build.
Setting enable-http2="false" on the listeners - while not ideal - worked around the problem for me.
It could be that your load balancer is doing http/1.1 on the backend which would be why you don't encounter the problem when routing through it.
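Concretely, the workaround amounts to flipping enable-http2 on the two listeners shown in the question (all other attributes unchanged from the original config):

```xml
<http-listener name="default" socket-binding="http" allow-unescaped-characters-in-url="true" redirect-socket="https" enable-http2="false" url-charset="UTF-8" />
<https-listener name="https" socket-binding="https" max-post-size="1048576000" allow-unescaped-characters-in-url="true" ssl-context="LocalhostSslContext" enable-http2="false" url-charset="UTF-8" />
```

With HTTP/2 off, Undertow falls back to HTTP/1.1, where allow-unescaped-characters-in-url takes effect as expected.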
Tried accessing the RabbitMQ management page on localhost:5672, and the connection is being refused. I have reinstalled RabbitMQ via Homebrew and am still running into the same problem. I ran rabbitmq-server after the reinstallation and got back this prompt:
## ## RabbitMQ 3.8.1
## ##
########## Copyright (c) 2007-2019 Pivotal Software, Inc.
###### ##
########## Licensed under the MPL 1.1. Website: https://rabbitmq.com
Doc guides: https://rabbitmq.com/documentation.html
Support: https://rabbitmq.com/contact.html
Tutorials: https://rabbitmq.com/getstarted.html
Monitoring: https://rabbitmq.com/monitoring.html
Logs: /usr/local/var/log/rabbitmq/rabbit@localhost.log
/usr/local/var/log/rabbitmq/rabbit@localhost_upgrade.log
Config file(s): (none)
Starting broker... completed with 6 plugins.
Not sure why I can't access the management page via the default port. I had a few applications using RabbitMQ running and none of them are working now. What is the best way to completely uninstall RabbitMQ from a Mac so that I can do a clean install?
I think you have to enable the management plugin, as stated in the RabbitMQ documentation:
The management plugin is included in the RabbitMQ distribution. Like any other plugin, it must be enabled before it can be used.
Just go to your RabbitMQ installation directory (example path /usr/save/rabbitmq_server-x.x.x/sbin) and run the following command:
rabbitmq-plugins enable rabbitmq_management
After this, if the RabbitMQ management page is still not accessible, try stopping and restarting the RabbitMQ server.
Here are some reference links:
RabbitMQ documentation on the management plugin
RabbitMQ networking ports information
To answer these:
Not sure why I can't access the management page via the default port.
Still can't access localhost:5762 after starting the RabbitMQ server
If the rabbitmqctl status / rabbitmq-diagnostics status command shows a listener like this:
Interface: [::], port: 15672, protocol: http, purpose: HTTP API
then RabbitMQ might be set up correctly.
Probable cause: HTTP redirection
The issue might instead be with the URL that's visited. Chrome could be set to redirect HTTP to HTTPS. If that is so, and you don't have an HTTPS listener set up, you'd see ERR_SSL_PROTOCOL_ERROR.
To get around this, you can disable redirection on Chrome only for localhost. By doing so, http://localhost:15672 will no longer be redirected to https://localhost:15672 and the management web client will therefore be visible.
How to disable HTTP redirection for a domain in Chrome
Visit chrome://net-internals/#hsts
Delete domain security policies for the domain (in this case simply enter localhost)
Click the Delete button
I'm in the process of creating a simple Mule flow in Anypoint Studio - it polls a directory periodically, and when a file is placed in the directory it sends it to an SFTP server. However, when the application starts negotiating a secure connection with the server, it fails with this error:
java.io.IOException: Error during login to username@host:
Session.connect: java.security.InvalidAlgorithmParameterException: DH
key size must be multiple of 64, and can only range from 512 to 8192
(inclusive). The specific key size 2047 is not supported
The stack trace references several files from the jsch library. The solutions in previous questions recommended upgrading to Java 8, using a different version of jsch, or editing the jsch jars themselves. My Mule server (version 3.9.0 EE) is already on Java 8, I've tried a few different versions of jsch, and editing the jars is not practical, since this application will be deployed to a few different environments.
I'm able to log in to the sftp server using the same credentials as the application via WinSCP. A coworker has tried modifying a working flow to use the same credentials to move the same file, and they get the same error. Here is the XML of my flow:
<flow name="ClCoFlow">
<file:inbound-endpoint path="${file.from}"
moveToDirectory="${file.backup}" responseTimeout="10000"
doc:name="Get File to Transfer" />
<logger
message="#[flowVars.originalFilename] being moved to #[flowVars.moveToDirectory]"
level="INFO" doc:name="File In" />
<sftp:outbound-endpoint exchange-pattern="one-way"
host="${sftp.host}" port="${sftp.port}" path="${sftp.path}" user="${sftp.user}"
password="${sftp.password}" responseTimeout="10000" doc:name="SFTP" />
<logger message="#[flowVars.originalFilename] sent to sftp service"
level="INFO" doc:name="File sent" />
</flow>
Thanks in advance for any help you can provide
EDIT
Though Mule is built on Java, and Mule applications are built behind the scenes using Java and Spring, there is no writing of actual Java code involved in creating a Mule flow.
Changing the provider seems to be the way to go here. Unfortunately, there is no way to do so with Mule connectors, so we kind of have to re-write the SFTP connector in plain Java. After downloading the Bouncy Castle .jars, put them in src/main/app/lib, then add them to the build path. You should then be able to import them (for some reason I had to import org.python.bouncycastle.jce.provider rather than org.bouncycastle.jce.provider). At the top of my code I put:
Security.insertProviderAt(new BouncyCastleProvider(), 1);
and when the flow runs, the DH key is properly negotiated and no errors are thrown.
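The provider registration can be wrapped in a small helper (a sketch; the class and method names are mine, and the BouncyCastleProvider instance you would pass in comes from the bcprov jar mentioned above):

```java
import java.security.Provider;
import java.security.Security;

public class JceProviderSetup {

    /**
     * Register a JCE provider at the highest priority if it is not already
     * present. In the Mule component this would be called once at startup
     * with new BouncyCastleProvider() (from the bcprov jar) so that jsch's
     * DH key exchange goes through Bouncy Castle instead of the default
     * provider, which rejects the non-standard 2047-bit key.
     */
    public static void registerFirst(Provider provider) {
        if (Security.getProvider(provider.getName()) == null) {
            Security.insertProviderAt(provider, 1);
        }
    }
}
```

Guarding with getProvider keeps repeated flow restarts from re-inserting the provider.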
I have a requirement where I should read a file from a remote SFTP location, with the following steps:
Read file from SFTP location say input folder.
Process file and call a REST API with the content of file.
If the call is successful, move the remote SFTP file to an archive
folder; otherwise move it to an error folder.
I am thinking of two approaches, and I don't know which are possible.
First: read the file from the remote SFTP location to local and delete it from remote. Then call the REST API. According to the response of the REST call (success or error), upload the file to the corresponding remote folder.
Second: read the file from the remote SFTP location to local, without deleting it. Then call the REST API. According to the response of the REST call (success or error), move the remote file to the corresponding remote folder.
Can anyone enlighten me as to which of these approaches is possible and convenient? I would appreciate it if you could mention the channel adapters as well.
So far I am able to call REST API.
<int-sftp:inbound-channel-adapter id="sftpInbondAdapter"
auto-startup="true"
channel="receiveChannel"
session-factory="sftpSessionFactory"
local-directory="${local.dir}"
remote-directory="${sftp.dir.input}"
auto-create-local-directory="true"
delete-remote-files="true"
filename-pattern="*.txt">
<int:poller fixed-rate="60000" max-messages-per-poll="1" />
</int-sftp:inbound-channel-adapter>
<int:channel id="receiveChannel"/>
<int:splitter input-channel="receiveChannel" output-channel="singleFile"/>
<int:channel id="singleFile"/>
<int:service-activator input-channel="singleFile"
ref="sftpFileListenerImpl" method="processMessage" output-channel="costUpdate" />
<int:channel id="costUpdate" />
<int:header-enricher input-channel="costUpdate" output-channel="headerEnriched">
<int:header name="content-type" value="application/json" />
</int:header-enricher>
<int:channel id="headerEnriched" />
<int-http:outbound-gateway
url="${cost.center.add.rest.api}" request-channel="headerEnriched"
http-method="POST" expected-response-type="java.lang.String" reply-channel="costAdded" >
</int-http:outbound-gateway>
<int:publish-subscribe-channel id="costAdded" />
I want to move the remote file to another location in the remote folder once the API call succeeds, after evaluating the API call response. My question is: how do I move the remote file to another remote location based on the response of the http:outbound-gateway?
See the retry-and-more sample, specifically the Expression Evaluating Advice Demo - it shows how to take different actions based on the success/failure of an upload.
EDIT:
In response to your comments on Artem's answer: since you have delete-remote-files="true", there's no remote file to move; you must first set that to false.
Then, use the advice as I suggested to process success or failure - use an int-sftp:outbound-gateway and mv the remote file. See the gateway documentation for the mv command.
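A rough sketch of that wiring, based on the config in the question (channel names, the archive path, and the header names are illustrative; it assumes the remote directory and file name have been captured into headers earlier in the flow, e.g. by a header enricher, since the plain inbound adapter does not set them):

```xml
<!-- Illustrative: route the REST call's outcome to success/failure channels -->
<int-http:outbound-gateway url="${cost.center.add.rest.api}"
        request-channel="headerEnriched" http-method="POST"
        expected-response-type="java.lang.String" reply-channel="costAdded">
    <int-http:request-handler-advice-chain>
        <bean class="org.springframework.integration.handler.advice.ExpressionEvaluatingRequestHandlerAdvice">
            <property name="successChannelName" value="moveToArchive"/>
            <property name="failureChannelName" value="moveToError"/>
            <property name="trapException" value="true"/>
        </bean>
    </int-http:request-handler-advice-chain>
</int-http:outbound-gateway>

<!-- Illustrative: mv the remote file on success; note the message on the
     success channel wraps the original request, so the expressions may need
     to reference the original message's headers -->
<int-sftp:outbound-gateway session-factory="sftpSessionFactory"
        request-channel="moveToArchive" command="mv"
        expression="headers['remoteDir'] + '/' + headers['remoteFile']"
        rename-expression="'${sftp.dir.archive}/' + headers['remoteFile']"
        reply-channel="nullChannel"/>
```

A second int-sftp:outbound-gateway on the failure channel would do the same with the error folder as the rename target.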