FTPClient Pool - Java

I am writing a REST service that connects to an FTP server to read some files, then performs some operations over the data to serve the request. I am using Apache Commons Net's FTPClient.
As a temporary solution, I create an FTPClient object, connect it, and log in with the credentials inside a method in my data access layer (the client is local to that method, since FTPClient is not thread safe), and then disconnect it before leaving the method (i.e. after reading the file). The problem is that FTPClient takes some 3-7 seconds to log in, which is far too slow. So I am thinking of implementing an FTPClient pool that can hand an already prepared client to the data access method.
Do any such client pools already exist?
If yes, which one should I opt for?
If not, the difficulty in implementing one is this: once created and connected, how long does an Apache FTPClient stay alive? Indefinitely? (What I mean is, what is the default keep-alive time for an FTPClient - the idle time after which the client gets disconnected - because I see various kinds of timeouts in the Javadocs.) And the next question is: how do you keep it alive permanently? (Maybe by sending NOOPs at regular intervals from a separate thread?) Any help on how I should move forward is really appreciated.
Thanks & Regards

The idle timeout for clients is generally determined on the server side.
Here are some of the less obvious client-side parameters:
soTimeout - determines how long the client blocks while waiting for a message. Generally you poll a socket every so often, and this determines how long you wait during a poll.
soLinger - determines how long to keep the connection open after close() has been called.
In my experience with FTP, clients normally just reconnect if the connection closes - unlike in other applications, it is not usually vital to have a constant, uninterrupted connection.
What are you using FTP for? It's normally not that time-critical a service...
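For reference, a minimal sketch of where these knobs live on Apache Commons Net's FTPClient (assuming Commons Net 3.x; the host, credentials and values are placeholders, not recommendations):

import org.apache.commons.net.ftp.FTPClient;

public class FtpTimeouts {
    // Hypothetical helper: returns a connected, logged-in client with explicit timeouts.
    static FTPClient newClient(String host, String user, String pass) throws Exception {
        FTPClient ftp = new FTPClient();
        ftp.setConnectTimeout(10_000);        // give up on connect() after 10 s
        ftp.setDefaultTimeout(30_000);        // SO_TIMEOUT applied to sockets created later
        ftp.connect(host);
        ftp.setSoTimeout(30_000);             // SO_TIMEOUT on the control connection (only valid after connect)
        ftp.setDataTimeout(30_000);           // read timeout for data connections
        ftp.setControlKeepAliveTimeout(60);   // send NOOPs on the control channel during long transfers (seconds)
        ftp.enterLocalPassiveMode();
        if (!ftp.login(user, pass)) {
            throw new java.io.IOException("FTP login failed: " + ftp.getReplyString());
        }
        return ftp;
    }
}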

As for client pools, I happened to write a demo project:
commons-pool-ftp
The FTP protocol is a bit annoying here: in our experience, a client taken straight from the pool could hit a broken pipe, so we validate connections on checkout with
testOnBorrow=true
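For what it's worth, here is a minimal sketch of such a pool built on Apache Commons Pool 2's GenericObjectPool (my own sketch, not necessarily how the commons-pool-ftp project does it); validateObject() sends a NOOP so that testOnBorrow weeds out dead connections, and the host, credentials and path are placeholders:

import org.apache.commons.net.ftp.FTPClient;
import org.apache.commons.pool2.BasePooledObjectFactory;
import org.apache.commons.pool2.PooledObject;
import org.apache.commons.pool2.impl.DefaultPooledObject;
import org.apache.commons.pool2.impl.GenericObjectPool;

public class FtpClientPoolDemo {

    // Factory that knows how to create, validate and destroy pooled FTPClients.
    static class FtpClientFactory extends BasePooledObjectFactory<FTPClient> {
        @Override
        public FTPClient create() throws Exception {
            FTPClient ftp = new FTPClient();
            ftp.connect("ftp.example.com");           // placeholder host
            if (!ftp.login("user", "password")) {     // placeholder credentials
                ftp.disconnect();
                throw new IllegalStateException("FTP login failed");
            }
            return ftp;
        }

        @Override
        public PooledObject<FTPClient> wrap(FTPClient ftp) {
            return new DefaultPooledObject<>(ftp);
        }

        @Override
        public boolean validateObject(PooledObject<FTPClient> p) {
            try {
                return p.getObject().sendNoOp();      // cheap liveness check used by testOnBorrow
            } catch (Exception e) {
                return false;
            }
        }

        @Override
        public void destroyObject(PooledObject<FTPClient> p) throws Exception {
            try {
                p.getObject().logout();
            } finally {
                p.getObject().disconnect();
            }
        }
    }

    public static void main(String[] args) throws Exception {
        GenericObjectPool<FTPClient> pool = new GenericObjectPool<>(new FtpClientFactory());
        pool.setTestOnBorrow(true);                   // run validateObject() on every borrow

        FTPClient client = pool.borrowObject();       // already connected and logged in
        try {
            client.listNames("/some/dir");            // placeholder operation
        } finally {
            pool.returnObject(client);
        }
        pool.close();
    }
}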

Configure:
protected static ThreadLocal<FTPClient> ftpClientContainer = new ThreadLocal<>();
Then use:
// ftpLogin() will be your FTP login method:
ftpClientContainer.set(ftpLogin());
Then, in each method, add:
FTPClient ftpClient = ftpClientContainer.get();
and finally, when done:
// ftpDisconnect() will be your FTP disconnect method:
ftpDisconnect(ftpClientContainer.get());
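Put together, a minimal sketch of that pattern (ftpLogin() and ftpDisconnect() here are hypothetical helpers standing in for your own connect/login and logout/disconnect methods):

import org.apache.commons.net.ftp.FTPClient;

public class FtpDao {

    // One FTPClient per thread, because FTPClient itself is not thread safe.
    protected static final ThreadLocal<FTPClient> ftpClientContainer = new ThreadLocal<>();

    public byte[] readFile(String path) throws Exception {
        ftpClientContainer.set(ftpLogin());           // connect + login for this thread
        try {
            FTPClient ftpClient = ftpClientContainer.get();
            try (java.io.ByteArrayOutputStream out = new java.io.ByteArrayOutputStream()) {
                ftpClient.retrieveFile(path, out);
                return out.toByteArray();
            }
        } finally {
            ftpDisconnect(ftpClientContainer.get());  // always clean up
            ftpClientContainer.remove();              // avoid leaking the client on pooled threads
        }
    }

    // Hypothetical helper: connect and log in.
    private FTPClient ftpLogin() throws Exception {
        FTPClient ftp = new FTPClient();
        ftp.connect("ftp.example.com");               // placeholder host
        ftp.login("user", "password");                // placeholder credentials
        ftp.enterLocalPassiveMode();
        return ftp;
    }

    // Hypothetical helper: log out and disconnect.
    private void ftpDisconnect(FTPClient ftp) throws Exception {
        if (ftp != null && ftp.isConnected()) {
            ftp.logout();
            ftp.disconnect();
        }
    }
}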

Related

send keep alive on long asynchronous request in spring server

I have a controller in Spring that receives a POST request and handles it asynchronously (using a DeferredResult object as the return value).
The response writes bytes to the HTTP stream directly (HttpServletResponse.getWriter().print()), and when it's done writing it sets the result on the DeferredResult object to close the connection.
I write my response to the stream in chunks.
The problem is that the client closes the connection if I don't write to it for 1 minute. (I may write some chunks and then not write anything for 1 minute, so the connection gets closed in the middle of my procedure.)
I want to control when the connection closes - I want to send a keep-alive while I'm not writing any data to the stream, so that the connection isn't closed until I decide to close it from the server side.
I haven't found out how to get control of the connection from the controller on the server.
Please assist.
Thanks.
There is no such thing as a "keep alive" during an ongoing HTTP request or response that would help against idle timeouts while a request or response is being received.
HTTP keep alive is only about keeping the TCP connection open after a response in order to process more requests on the same connection. TCP keep alive is instead used to detect connection loss without TCP shutdown and can also be used to prevent idle timeouts in stateful packet filters (as used in firewalls or NAT routers) in between client and server. It does not prevent idle timeouts at the application level though since it does not transport any data visible to the application level.
Note that the way you want to use HTTP is contrary to how HTTP was designed originally. It was designed for a client sending a full request and the server sending a full response immediately and not for the server sending some parts of the response, idling some time and then send some more. The proper way to implement such behavior would be by using WebSockets. With WebSockets both client and server can send new messages at any time (i.e. no request-response schema) and it also supports keep-alive messages. If WebSockets are not an option you can instead implement a polling client which regularly polls for new data from the server with a new request.
I ran into a similar need just recently. The server code executes a long-running operation that can take as long as 30 minutes to return, and the client times out long before that. The solution was to have the long-running operation send periodic "keep alive" packets of data to the client via a "callback" argument provided by the request handler method.
The callback is nothing more than a function (think of a lambda in Java) that takes the "keep alive" data packet to send to the client as a parameter and writes that packet to the client via the java.io.PrintWriter reference you can get from javax.servlet.http.HttpServletResponse.
Below is the handler method that does this. I had to refactor the code in the call hierarchy to accept this new "callback" parameter until it reached the method performing the long-running operation, and inside that code I invoke the "callback" every so often, for example every time 10 records are processed. Note that the code below is Groovy (a scripting language on top of Java that runs on the JVM) and the server-side framework is Spring:
...
@Autowired
DataImporter dataImporter

@PostMapping("/my/endpoint")
void importData(@RequestBody MyDto myDto, HttpServletResponse response) {
    // Callback that lets servant code deep in the call hierarchy report any arbitrary message back to the client
    Closure<Void> callback = { String str ->
        response.writer.print str
        response.writer.flush()
    }
    // This leads to the code that is performing the long running operation. Using
    // this "hook", that code has a direct connection to the client whereby
    // it can send packets of data to keep the connection from timing out.
    dataImporter.importData(myDto, callback)
}
...

How to set a time limit for ServerSocket to wait for the Client Socket to get connected

I am programming in Java using sockets and am facing the issue mentioned below.
I wrote a server program which sends plain text to the client side. When I start the server, it waits for a long time for a client socket to connect. But I want the server to wait only for a specified time, say 5 minutes, and then report to the user that no client connected within the specified time.
I am unable to understand how to implement this. I have gone through the Timer and TimerTask classes, but it's a bit confusing.
Use ServerSocket.setSoTimeout(int timeOut) to wait for the client. Set it to 0 if you need to wait infinitely.
Note: Java doc says:
The option must be enabled prior to entering the blocking operation to have effect.
As the Java API doc describes for ServerSocket.accept():
Throws:
IOException - if an I/O error occurs when waiting for a connection.
SecurityException - if a security manager exists and its checkAccept method doesn't allow the operation.
SocketTimeoutException - if a timeout was previously set with setSoTimeout and the timeout has been reached.
See
Java API Doc Socket
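A minimal sketch of that approach (the port is a placeholder; the 5-minute limit is the one from the question):

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class TimedServer {
    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(12345)) {   // placeholder port
            server.setSoTimeout(5 * 60 * 1000);                 // wait at most 5 minutes in accept()
            try (Socket client = server.accept()) {
                System.out.println("Client connected: " + client.getRemoteSocketAddress());
                // ... send the plain text to the client ...
            } catch (SocketTimeoutException e) {
                System.out.println("No client connected within the specified time.");
            }
        }
    }
}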

How to set Java NIO AsynchronousSocketChannel connect timeout

Looking at the JDK 1.7 API, I cannot seem to be able to set a connection timeout on an AsynchronousSocketChannel. Is there any way I can set up a connection timeout on such a channel?
Thanks.
The answer is: you can't.
The first thing to understand is how a TCP connect works. The kernel is sending SYN packets, backing off the time between each retry. This can be tuned via kernel parameters. An article covering this in detail (for linux) can be found here
To give you an idea of what's involved in implementing your own shorter timeout for a socket connect: you put the socket in non-blocking mode, throw it into a select() with a timeout, then use getsockopt() to see what happened. This StackOverflow answer shows how this works.
With NIO.2, the connection process is handled for you using threads that you don't have access to. Unfortunately there's no way to tell it you want a shorter timeout on connects; it just calls your completion handler / notifies the Future when the connection either succeeds or fails (including timing out).
You do have the option of calling get(timeout, unit) on the returned Future, then cancelling the Future if it times out ... but that means if you want the connect to be async you have to add another layer of threading / callbacks, and you may as well just implement your own async mechanism with NIO.
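For illustration, a minimal sketch of that Future-based option (host, port and the 5-second timeout are placeholders):

import java.net.InetSocketAddress;
import java.nio.channels.AsynchronousSocketChannel;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class ConnectWithTimeout {
    public static void main(String[] args) throws Exception {
        AsynchronousSocketChannel channel = AsynchronousSocketChannel.open();
        Future<Void> connect = channel.connect(new InetSocketAddress("example.com", 80));
        try {
            connect.get(5, TimeUnit.SECONDS);   // block at most 5 s for the connect to complete
            System.out.println("Connected");
        } catch (TimeoutException e) {
            connect.cancel(true);               // give up on the pending connect
            channel.close();
            System.out.println("Connect timed out");
        }
    }
}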
One last thing worth mentioning, since you're looking at async network stuff, is that Netty does give this to you (using NIO):
Bootstrap bootstrap = new Bootstrap()
.group(new NioEventLoopGroup())
.channel(NioSocketChannel.class)
.remoteAddress(new InetSocketAddress(remoteAddress, port))
.option(ChannelOption.CONNECT_TIMEOUT_MILLIS, connectionTimeout);
ChannelFuture f = bootstrap.connect();
And you can register a listener with that ChannelFuture for a callback.
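For example (a sketch continuing the bootstrap snippet above; what you do in the listener is up to you):

// Continuing from the snippet above; requires io.netty.channel.ChannelFutureListener.
f.addListener((ChannelFutureListener) future -> {
    if (future.isSuccess()) {
        // connected within CONNECT_TIMEOUT_MILLIS
    } else {
        // connect failed or timed out; inspect future.cause()
    }
});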

How to transfer data from one jsp to another jsp with sockets

I have 3 JSPs. The first one asks the user for their username. Once the form is submitted, the user is taken to a second JSP where a unique passcode is created for them. How would I go about taking this passcode and passing it to a third JSP using a socket?
You can use java.net.URL and java.net.URLConnection to fire and handle HTTP requests programmatically. They make use of sockets under the covers and this way you don't need to fiddle with low level details about the HTTP protocol. You can pass parameters as query string in the URL.
String url = "http://localhost:8080/context/3rd.jsp?passcode=" + URLEncoder.encode(passcode, "UTF-8");
InputStream input = new URL(url).openStream();
// ... (read it, it contains the response)
This way the passcode request parameter is available in the 3rd JSP by ${param.passcode} or request.getParameter("passcode") the usual way.
However, it is better to just include that 3rd JSP in your 2nd JSP:
request.setAttribute("passcode", passcode);
request.getRequestDispatcher("3rd.jsp").include(request, response);
This way the passcode is available as request attribute in the 3rd JSP by ${passcode} or request.getAttribute("passcode") the usual way.
See also:
Using java.net.URLConnection to fire and handle HTTP requests
Unrelated to the concrete question: this is, however, a terribly nasty hack, and the purpose of it is beyond me. There's a serious design flaw somewhere in your application. Most likely those JSPs are tightly coupled with business logic which actually belongs in normal and reusable Java classes like servlets and/or EJBs and/or JAX-WS/RS, which you just import and call in your Java class the usual Java way. JSPs are meant to generate and send HTML, not to act as business services, let alone web services. See also How to avoid Java code in JSP files?
So, you want the username to be submitted from the first JSP to the second, by submitting a form to the second, right?
But, for interaction between the second and third, you want to avoid using the communication mechanisms behind the JSP files and use your own, right?
Well, how you might implement doing this depends on where you're sending your communication from and to. For instance, are they on the same machine, or on different machines?
Generally speaking, you'll need a client-server type of relationship to be set up here. I imagine that you would want your third JSP to act as the server.
What the third JSP will do is will sit and wait for a client to try to communicate with it. But, before you can do that, you'll first need to bind a port to your application. Ports are allocated by the Operating System and are given to requesting processes.
When trying to implement this in Java, you might want to try something like the following:
int port_number = 1080;
ServerSocket server = new ServerSocket(port_number);
In the above example, the ServerSocket is already bound to the specified port 1080. It doesn't have to be 1080 - 1080 is just an example.
Next, you will want to listen and wait for a request to come in. You can implement this step in the following:
Socket request = null;
while((request = server.accept()) == null)
{}
This will cause the server socket to keep looping until it finally receives a request. When the request comes in, it will create a new Socket object to handle that request. So, you could come back to your loop later on and continue to wait and accept requests, while a child thread handles communication using your newly created request Socket.
But, for your project, I would guess that you don't need to communicate with more than one client at a time, so it's okay if we just simply stop listening once we receive a request, I suppose.
So, now onto the client application. Here, it's a little bit different from what we had with the server. First off, instead of listening in on the port and waiting for are request, the client's socket will actively try to connect to a remote host on their port. So, if there is no server listening in on that port, then the connection will fail.
So, two things will need to be known:
What's the IP Address of the server?
What port is the server listening in on?
There are short-cuts to getting the connection using the Java Socket class, but I assume that you're going to test this out on the same machine, right? If so, then you will need two separate ports for both your client and server. That's because the OS won't allow two separate processes to share the same port. Once a process binds to the port, no other process is allowed to access it until that port releases it back to the OS.
So, to make the two separate JSP's communicate on the same physical machine, you'll need both a local port for your client, and you'll need the server's port number that it's listening in on.
So, let's try the following for the client application:
int local_port = 1079;
int remote_port = 1080;
InetSocketAddress localhost = new InetSocketAddress(local_port);
Socket client = new Socket(); //The client socket is not yet bound to any ports.
client.bind(localhost); //The client socket has just requested the specified port number from the OS and should be bound to it.
String remoteHostsName = "[put something here]";
InetSocketAddress remotehost = new InetSocketAddress(InetAddress.getByName(remoteHostsName), remote_port); //Performs a DNS lookup of the specified remote host and returns an IP address with the allocated port number
client.connect(remotehost); //Connection to the remote server is being made.
That should help you along your way.
A final note should be made here. You can't actually run these two applications using the same JVM. You'll need two separate processes for client and server applications to run.

java.net.SocketTimeoutException: Read timed out

I have an application with a client-server architecture. The client uses Java Web Start with Java Swing / AWT, and the server uses an HTTP server / servlet with Tomcat.
The communication is done by serializing objects: the client creates an ObjectOutputStream, serializes the objects to a byte array and sends it to the server, which in turn deserializes it with an ObjectInputStream.
The application communicates correctly up to a certain level of concurrency, at which point it starts showing the error "java.net.SocketTimeoutException: Read timed out". The error happens when the server invokes ObjectInputStream.readObject() in my servlet's doPost method.
Tomcat becomes slow, and as the errors accumulate the server's response time keeps degrading until it crashes and I must restart the server; after that, everything works again.
Has anyone run into this problem?
Client Code
URLConnection conn = url.openConnection();
conn.setDoOutput(true);
OutputStream os = conn.getOutputStream();
ObjectOutputStream oss = new ObjectOutputStream(os);
oss.writeUTF("protocol header sample");
oss.writeObject(_parameters);
oss.flush();
oss.close();
Server Code
ObjectInputStream input = new ObjectInputStream(_request.getInputStream());
String method = input.readUTF();
parameters = input.readObject();
input.readObject() is where the error is
You haven't given us much information to go on, especially about the client side. But my suspicion is that the client side is:
failing to set the Content-Length header (or setting it to the wrong value),
failing to flush the output stream, and/or
not closing the output side of the socket.
Mysterious.
Based on your updated question, it looks like none of the above. Here are a couple of other possibilities:
For some reason the client side is either locking up entirely during serialization or taking a VERY LONG TIME.
There is a proxy between the client and server that is causing problems.
You are experiencing load-related network problems, or network hardware problems.
Another possible explanation is that you have a memory leak, and that the slowdown is caused by the GC taking more and more time as you run out of memory. This will show up in the GC logs if you have them enabled.
I think that during high concurrency, the socket timeout set in Tomcat expires and the connection is closed. The next read by Tomcat on that connection then comes later than the socket timeout specified on the server.
If you want to avoid this problem, you have to increase the timeout on the server side, which is what is expiring in your case - but that is not advisable.
By the way, you did not give enough information. Did you increase the number of connection threads in Tomcat? If you did, this would surely happen.
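For reference, the setting in question is the connectionTimeout attribute of the HTTP connector in Tomcat's conf/server.xml; the value below is just the stock default (in milliseconds):

<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443" />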
