Every five seconds, the following function is called:
private boolean ping() {
    try {
        URL pingServerUrl = new URL(serverResourceLocator);
        HttpURLConnection connection = (HttpURLConnection) pingServerUrl.openConnection();
        if (connection.getResponseCode() == 200) {
            lastPingSuccessful = true;
        }
        System.out.println("pinged");
    } catch (Exception exc) {
        exc.printStackTrace();
    }
    return lastPingSuccessful;
}
It is a kind of ping function: it tries to connect to the servlet on the server and sends some credentials along with the URL serverResourceLocator. What bothers me is that a new connection is opened every 5 seconds.
How can I avoid that?
You can't recycle an HTTP connection at this level; HTTP is a stateless protocol. The best you can do is to not close the connection once it is opened, keep it alive, and send a heartbeat message from the server down to the client.
If the use of the function is what I am guessing it to be, i.e. to test the servlet, it is best to leave the connection part in; you get more coverage out of your test script. A connection every 5 seconds is hardly anything; the server will not feel it.
Otherwise, store the connection in a field (effectively a global) and reuse it every time you make a request, as in the sketch below.
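A minimal sketch of that idea using the Java 11 java.net.http.HttpClient, which keeps pooled keep-alive connections and reuses them across requests, so a ping every 5 seconds does not have to open a new TCP connection each time. The class shape and field names are illustrative assumptions, not the asker's actual code:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

public class Pinger {
    // One client for the lifetime of the application; it reuses pooled
    // keep-alive connections instead of opening a new one per ping.
    private final HttpClient client = HttpClient.newBuilder()
            .connectTimeout(Duration.ofSeconds(2))
            .build();

    private final String serverResourceLocator; // same URL as in the question
    private volatile boolean lastPingSuccessful;

    public Pinger(String serverResourceLocator) {
        this.serverResourceLocator = serverResourceLocator;
    }

    boolean ping() {
        try {
            HttpRequest request = HttpRequest.newBuilder(URI.create(serverResourceLocator)).build();
            HttpResponse<Void> response = client.send(request, HttpResponse.BodyHandlers.discarding());
            lastPingSuccessful = (response.statusCode() == 200);
        } catch (Exception exc) {
            exc.printStackTrace();
        }
        return lastPingSuccessful;
    }
}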
I don't think there is any alternative to this; you will have to make an HTTP connection to get the status of the URL.
Two lightweight alternatives to Java are:
The Unix curl utility
An Ajax call from your JavaScript
Let's say we want to remove a client's data after EXPIRY_TIME.
I think one of these two methods can be followed:
Use a cron job (scheduler) to check the session's last-modified time (which reflects the last access time).
For each client request, write an access-log entry to a flat file and have a cron job check that log.
In either case, if the last access time is older than EXPIRY_TIME, remove the client's data, as sketched below.
This approach saves round trips and helps reduce traffic.
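A minimal in-process sketch of the scheduler idea; the map of last-access times, the EXPIRY_TIME value, and the sweep interval are illustrative assumptions:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ExpiryJob {
    private static final long EXPIRY_TIME_MS = 30 * 60 * 1000; // hypothetical: 30 minutes

    // clientId -> last access timestamp, updated on every request
    private final Map<String, Long> lastAccess = new ConcurrentHashMap<>();

    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    public void start() {
        // Sweep once a minute; remove entries whose last access is older than EXPIRY_TIME
        scheduler.scheduleAtFixedRate(() -> {
            long now = System.currentTimeMillis();
            lastAccess.entrySet().removeIf(e -> now - e.getValue() > EXPIRY_TIME_MS);
        }, 1, 1, TimeUnit.MINUTES);
    }

    // Call this from the request handler on every client request
    public void touch(String clientId) {
        lastAccess.put(clientId, System.currentTimeMillis());
    }
}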
You can release the connection once the job is done. In any case, a Java web server is designed to handle multiple requests; even with a request coming to the server every 5 seconds, it's not a problem.
So I have a problem with a Java program. The program's basic functionality includes connecting to a web API for data. The function that does that looks like this:
public static Object getData(String sURL) throws IOException {
    URL url = new URL(sURL);
    URLConnection request = url.openConnection();
    request.connect();
    return request.getContent();
}
The code works fine as it is, but recently, after my house changed ISPs, I have found that connections sometimes take an unreasonably long time, 10 seconds or more in about 10% of attempts, while the other 90% take only around 200 ms. I have found it faster to ask my program to call the function again in a different thread than to wait for some of these connections to finally connect.
Therefore, I want to change the function so that if the connection has not been established after 500 ms, it disconnects and attempts a new connection. How could I do this?
Somewhere online I read that HttpURLConnection might help, but I am not sure how.
URLConnection allows you to specify the connect and read timeouts prior to calling connect():
https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/net/URLConnection.html#setConnectTimeout(int)
Sets a specified timeout value, in milliseconds, to be used when opening a communications link to the resource referenced by this URLConnection. If the timeout expires before the connection can be established, a java.net.SocketTimeoutException is raised. A timeout of zero is interpreted as an infinite timeout.
With a 500 ms timeout:
try {
    URLConnection request = url.openConnection();
    request.setConnectTimeout(500); // 500 ms
    request.connect();
    // on successful connection
} catch (SocketTimeoutException ex) {
    // connect attempt timed out
}
You can pack this into a loop, but I recommend limiting the number of attempts made; a sketch follows.
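A minimal sketch of such a loop, with an arbitrary limit of 3 attempts (the limit and method name are illustrative assumptions):

import java.io.IOException;
import java.net.SocketTimeoutException;
import java.net.URL;
import java.net.URLConnection;

public static Object getDataWithRetry(String sURL) throws IOException {
    final int maxAttempts = 3; // hypothetical limit
    SocketTimeoutException lastTimeout = null;
    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
        try {
            URLConnection request = new URL(sURL).openConnection();
            request.setConnectTimeout(500); // give up connecting after 500 ms
            request.connect();
            return request.getContent();
        } catch (SocketTimeoutException ex) {
            lastTimeout = ex; // this attempt timed out; try again
        }
    }
    throw lastTimeout; // all attempts timed out
}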
Java's URLConnection has no retry capabilities (as of Java 8), so the best way to achieve this is to use an appropriate standalone third-party library such as Apache HttpClient.
It is by far the best standalone third-party HTTP client with advanced capabilities as of 2020, and it is still maintained.
By default, Apache HttpClient (the 4.x line, which matches the org.apache.http packages below) uses the default implementation of org.apache.http.client.HttpRequestRetryHandler, which retries a request 3 times, but you can use a custom implementation instead.
The configuration might look like this (fully qualified names are used for the example's sake):
org.apache.http.client.HttpClient httpClient = org.apache.http.impl.client.HttpClients.custom()
        .setRetryHandler(yourCustomRetryHandlerImpl) // your implementation of HttpRequestRetryHandler
        // other config
        .build();
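If you don't need custom logic, the bundled DefaultHttpRequestRetryHandler can be configured with an attempt count directly; a minimal sketch (the count of 5 is an arbitrary example):

import org.apache.http.client.HttpClient;
import org.apache.http.impl.client.DefaultHttpRequestRetryHandler;
import org.apache.http.impl.client.HttpClients;

// Retry up to 5 times; 'true' also retries requests that were already sent
HttpClient httpClient = HttpClients.custom()
        .setRetryHandler(new DefaultHttpRequestRetryHandler(5, true))
        .build();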
There is no way I can reproduce that problem using my ISP.
I suggest you dig deeper into the problem and find a better solution. Sending another request just doesn't seem good enough to me. Maybe try a different way to get the data and see if that works for you. Can't say for sure as I can't reproduce the problem.
I have a Spring controller that handles a POST request asynchronously (using a DeferredResult object as the return value).
The response for this request is written as bytes directly to the HTTP stream (HttpServletResponse.getWriter().print()), and when writing is done the controller sets a result on the DeferredResult object to close the connection.
I write my response in chunks.
The problem is that the client closes the connection if I don't write to it for 1 minute. (I may write some chunks and then stop writing for 1 minute, so the connection gets closed in the middle of my procedure.)
I want to control the connection-closing procedure: I want to send a keep-alive while I'm not writing any data to the stream, so that the connection won't be closed until I decide to close it from the server side.
I haven't found out how to get control of the connection from the controller on the server.
Please assist.
Thanks.
There is no such thing as a keep-alive during an ongoing request or response in HTTP that can help with idle timeouts while a request or response is being received.
HTTP keep alive is only about keeping the TCP connection open after a response in order to process more requests on the same connection. TCP keep alive is instead used to detect connection loss without TCP shutdown and can also be used to prevent idle timeouts in stateful packet filters (as used in firewalls or NAT routers) in between client and server. It does not prevent idle timeouts at the application level though since it does not transport any data visible to the application level.
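For reference, TCP keep-alive can be enabled on a Java socket like this (a minimal sketch; the endpoint is hypothetical, and whether the probes actually prevent a given firewall or NAT router from timing out depends on the environment):

import java.net.Socket;

Socket socket = new Socket("example.com", 80); // hypothetical endpoint
// Ask the OS to send TCP keep-alive probes on this connection.
// The probe interval is configured at the OS level, not per socket.
socket.setKeepAlive(true);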
Note that the way you want to use HTTP is contrary to how HTTP was originally designed. It was designed for a client sending a full request and the server sending a full response immediately, not for the server sending some parts of the response, idling for some time, and then sending some more. The proper way to implement such behavior would be WebSockets. With WebSockets both client and server can send new messages at any time (i.e. there is no request-response schema), and they also support keep-alive messages. If WebSockets are not an option, you can instead implement a polling client which regularly polls the server for new data with a new request, as sketched below.
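A minimal sketch of such a polling client, assuming a hypothetical /poll endpoint that returns any new data in the response body:

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class PollingClient {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://example.com/poll"); // hypothetical endpoint
        while (true) {
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setConnectTimeout(5_000);
            conn.setReadTimeout(10_000);
            try (InputStream in = conn.getInputStream()) {
                in.transferTo(System.out); // process whatever new data the server returned
            } finally {
                conn.disconnect();
            }
            Thread.sleep(5_000); // poll every 5 seconds
        }
    }
}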
I ran into a similar need just recently. The server code executes a long-running operation that can take as long as 30 minutes to return, and the client times out long before that. The solution was to have the long-running operation send periodic keep-alive packets of data to the client via a callback argument provided by the request-handler method.

The callback is nothing more than a function (think of a lambda in Java) that takes the keep-alive data packet as a parameter and writes it to the client via the java.io.PrintWriter reference that you can get from javax.servlet.http.HttpServletResponse. The code below is the handler method that does this. I had to refactor the code in the call hierarchy to accept this new callback parameter until it reaches the method performing the long-running operation; inside that code I invoke the callback every so often, for example every time 10 records are processed. Note that the code below is Groovy (scripting code on top of Java that runs on the JVM) and the server-side framework is Spring.
...
@Autowired
DataImporter dataImporter

@PostMapping("/my/endpoint")
void importData(@RequestBody MyDto myDto, HttpServletResponse response) {
    // Callback to allow servant code deep in the call hierarchy to report
    // back to the client any arbitrary message
    Closure<Void> callback = { String str ->
        response.writer.print str
        response.writer.flush()
    }

    // This leads to the code that is performing a long running operation. Using
    // this "hook" that code has a direct connection to the client whereby
    // it can send packets of data to keep the connection from timing out.
    dataImporter.importData(myDto, callback)
}
In our Java mail application (using the JavaMail API) we first connect to the mail server, fetch messages, process the headers, and afterwards process the message bodies and attachments, using POP3 as usual.
Session session = Session.getInstance(props, null);
Store store = session.getStore(urln); // urln is the POP3 URLName
store.connect();
Folder f = store.getFolder("INBOX");
f.open(Folder.READ_ONLY);
Message[] messages = f.getMessages(..);
for (Message m : messages) {
    if (!store.isConnected()) {
        // raise exception
    }
    processSubject();
    processFrom();
    processBodyAndAttachments();
    ..
}
The implementation works fine in most environments, but for some customers the store connection gets lost during the for loop; we can see the raised exception in the logs. My questions:
AFAIK, the mail server can sometimes reject new connections, but does it also terminate currently living connections (maybe because of too many connections, or disconnecting old ones to give access to new ones)?
When the store is disconnected, does the folder get closed too? Is it better to check the folder?
The connection may be lost anywhere in the for loop, and it does not seem to be good practice to put an isConnected check everywhere in the loop; it will make the code dirty and also cause performance issues. Is it good practice to put the loop in a try/catch block and check for IOExceptions ("Folder closed")? Or other suggestions? Which exceptions should be handled? There may be cases where the message is not parseable but the connection is healthy.
What about adding a disconnect listener?
Network connections can be broken for a variety of reasons. Your program always has to be prepared for the connection to drop at any time.
With POP3, there is only one connection, so if the connection is dropped the store should be disconnected and the folder should be closed.
If the Folder is open, check the Folder. Otherwise check the Store.
You need a strategy for handling failures. If you keep track of which messages have been processed successfully, you may be able to restart the processing at the next message after a failure, as sketched below. A lot of the details depend on your environment and your application requirements.
A disconnect listener won't make this easier.
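A minimal sketch of such a resume strategy; the processMessage body and the decision to catch FolderClosedException are illustrative assumptions:

import javax.mail.Folder;
import javax.mail.FolderClosedException;
import javax.mail.Message;
import javax.mail.MessagingException;

// Process messages starting at 'start' (1-based), returning the index of the
// next unprocessed message so the caller can reconnect and resume from there.
static int processMessagesFrom(Folder f, int start) throws MessagingException {
    int next = start;
    try {
        int count = f.getMessageCount();
        for (; next <= count; next++) {
            processMessage(f.getMessage(next));
        }
    } catch (FolderClosedException e) {
        // Connection dropped: caller should reconnect, reopen INBOX,
        // and call processMessagesFrom(f, next) again to resume.
    }
    return next;
}

static void processMessage(Message m) throws MessagingException {
    // hypothetical stand-in for the subject/from/body handling in the question
    System.out.println(m.getSubject());
}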
I need some help with the following problem:
I open a TCP socket in the constructor and then proceed to send an object to the server over an ObjectOutputStream. I have no control over the server and don't get any response back.
How can I detect that the connection was lost? Will I always get an IOException when trying to write? According to the Javadoc, once a connection has been made successfully most of the checks are basically useless to me.
Additionally, what is the best way to reconnect a socket? Set the reference to null and then create a new one?
Here is my current approach:
I have a status list with the following statuses:
SocketSuccess; SocketFailure; MessageSuccess; MessageFailure
My idea is kind of like a state machine: first check what the last status was. If the connection was successful, or the last message was successful, then try to send the message. When I get an IOException, set the status to MessageFailure and save the message locally until I get a successful connection again.
Or are there any recommended patterns for this kind of situation?
Clearing up your doubts: if the connection with the server is lost, the client will throw an IOException, and if unhandled it will kill the application. If you handle the exception, try to reconnect to the server, and re-establish the input/output streams, your messaging can start again. The messages you send will travel only while there is a connection between server and client; so when the connection is lost you will get an IOException, and when you handle that exception and reconnect, new input/output streams should be established to carry your messaging service, as sketched below.
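A minimal sketch of that reconnect pattern; the host, port, and single-retry policy are illustrative assumptions:

import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.net.Socket;

public class ReconnectingSender {
    private static final String HOST = "example.com"; // hypothetical
    private static final int PORT = 9000;             // hypothetical

    private Socket socket;
    private ObjectOutputStream out;

    private void connect() throws IOException {
        socket = new Socket(HOST, PORT);
        out = new ObjectOutputStream(socket.getOutputStream());
    }

    // Try to send; on IOException, rebuild the socket and retry once.
    public synchronized void send(Serializable message) throws IOException {
        if (socket == null || socket.isClosed()) {
            connect();
        }
        try {
            out.writeObject(message);
            out.flush();
        } catch (IOException e) {
            // Connection was lost: drop the old socket, build a new one,
            // and retry the write (the caller can queue the message on failure).
            try { socket.close(); } catch (IOException ignored) { }
            connect();
            out.writeObject(message);
            out.flush();
        }
    }
}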
I encountered an issue with the getInputStream method of the URLConnection class. I'm aware that similar issues are discussed in other threads, but no solution seemed to work in my case.
The funny thing is that while the first execution goes well, further ones fail (block). Before describing the issue, I'd like to give some background.
Basically I have a simple client-server configuration. As I don't want to hardcode the server address and port in the client app, I use an HTTP server (nginx) from which the actual connection parameters can be retrieved.
On the client side there's a "network thread" that is controlled by a service. The service starts the thread and can interrupt it when needed. At the very beginning of the run() method, the following function is invoked:
private ConnectionParameters obtainConnectionParameters(String url)
        throws MalformedURLException, IOException {
    URLConnection connection = new URL(url).openConnection();
    InputStream in = connection.getInputStream(); // here the problem occurs
    ... // do some processing
    in.close();
    return connectionParameters;
}
Once the connection parameters are obtained, another socket connection is opened. After some time the thread may be closed or simply reach the end of its run() method; I double-checked that it exits cleanly.
Returning to the problem: I have no idea what may be causing this behavior. Do you have any clues?
I'd also like to mention that the service and the network thread run in a separate (background) process from the activities. There's no other place in this process where URLConnection is used, and all variables used in obtainConnectionParameters are local.
I suppose nothing crucial is missing from the description; otherwise please let me know, and I'll edit my post.
EDIT (1):
I have just tried the Apache HTTP client, as in the thread Make an HTTP request with Android, and it worked well. I'd still love to find out what is wrong with URLConnection, though.
If I understand you correctly, the code snippet above is called multiple times; the first time it works fine, but the second time it blocks on the getInputStream() call?
The problem could be on the server side. Maybe the server only accepts one connection at a time, and the first connection you made is still open? Is it possible to open the URL with a browser multiple times, to verify that the server works as expected?
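On the client side it is also worth ruling out connection reuse as a factor: if the stream is not closed (for example because the processing in between throws), HttpURLConnection's keep-alive pooling can leave the previous connection occupied. A defensive rewrite of the method from the question using try-with-resources, under that assumption:

private ConnectionParameters obtainConnectionParameters(String url)
        throws MalformedURLException, IOException {
    URLConnection connection = new URL(url).openConnection();
    // try-with-resources guarantees the stream is closed even if the
    // processing below throws, releasing the underlying connection
    try (InputStream in = connection.getInputStream()) {
        ... // do some processing, producing connectionParameters as in the original
        return connectionParameters;
    }
}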