My client sends 12 requests in a loop (nothing should differ between them, since they are very similar) to a Servlet on the server (Tomcat).
When I look at the app server access log, I only see 8 of them. I am not sure whether the client sent all 12 requests to the server successfully.
Could someone confirm that a request isn't logged to access_log until the response is available? If that's the case, all requests may have reached the app server correctly, but four responses were never produced.
Is there any way to find out why requests get lost? Is there a timeout on the server side? For example, if the server takes too long to respond, does it drop the request?
By the way, I am running both client and server on my local machine.
The access log entry can't be written until the response is sent, since otherwise the logger couldn't know what the response code was, but it is also subject to buffering and flushing.
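If buffering is the suspect, Tomcat's standard AccessLogValve can be told to flush every entry. A sketch of the relevant valve in conf/server.xml (buffered="false" disables output buffering, and %D adds the processing time in milliseconds; directory and prefix are the defaults, adjust as needed):

<Valve className="org.apache.catalina.valves.AccessLogValve"
       directory="logs" prefix="localhost_access_log" suffix=".txt"
       pattern="%h %l %u %t &quot;%r&quot; %s %b %D"
       buffered="false" />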
Related
The problem:
I am seeing some strange behaviour from a Jetty server (REST over HTTPS) when some client connections are closed (client-side) before the server has had time to reply. Normally this is well managed and expected by a web/application server, but in one specific case something breaks and the server stops replying.
I am trying to reproduce the issue programmatically and locally, by opening a client connection and closing it before the server has had time to reply, but I do not have much experience with this kind of situation; the clients I normally write are not expected to die immediately.
I am not particular about the language/application I use to replicate my case: it can be a Java program, a netcat command, telnet, dotnetcore... The only constraint is that it should run on a Kubernetes pod, if possible.
I am trying to use Java to open a socket and then close it immediately, or to create an HTTP client and stop it immediately after the request is sent, but with no luck so far.
At the same time I am looking at netcat, but I fear it's too low-level for a REST request.
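For reference, this is the kind of minimal Java sketch I am experimenting with, over plain TCP (for HTTPS I would wrap the socket with an SSLSocketFactory; host, port, and path are placeholders). Setting SO_LINGER to zero makes close() send a TCP RST instead of a normal FIN, which is about as abrupt as a client can get:

import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class AbruptClient {
    public static void main(String[] args) throws Exception {
        Socket socket = new Socket("localhost", 8080); // placeholder host/port
        // linger = 0: close() aborts the connection with RST instead of FIN
        socket.setSoLinger(true, 0);
        OutputStream out = socket.getOutputStream();
        out.write(("GET /api/resource HTTP/1.1\r\n"
                + "Host: localhost\r\n"
                + "Connection: close\r\n"
                + "\r\n").getBytes(StandardCharsets.US_ASCII));
        out.flush();
        socket.close(); // close before the server can reply
    }
}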
I'm wondering about a scenario where the client does data streaming. During that process it will send three requests. Let's assume the server receives only two.
How will the server react in this situation? I guess the server will never notify the client about the finished call (it knows how many requests to expect), and the call will be terminated once the deadline, if one has been defined, expires. Are my assumptions valid?
I'm working on the Java implementation of gRPC.
That's correct. If the server is waiting for the 3rd request which it never receives, the call will be terminated by the deadline.
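To illustrate, a sketch of how a deadline is typically attached in grpc-java (DataServiceGrpc is a placeholder for whatever stub your .proto generates; address and timeout are arbitrary):

import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import java.util.concurrent.TimeUnit;

public class DeadlineExample {
    public static void main(String[] args) {
        ManagedChannel channel = ManagedChannelBuilder
                .forAddress("localhost", 50051) // placeholder address
                .usePlaintext()
                .build();
        // The deadline is absolute and starts counting when withDeadlineAfter
        // is called, so derive the stub right before starting the call.
        DataServiceGrpc.DataServiceStub stub = DataServiceGrpc
                .newStub(channel)
                .withDeadlineAfter(5, TimeUnit.SECONDS);
        // If the server-side stream is still waiting for a 3rd request after
        // 5 seconds, the call terminates with Status.DEADLINE_EXCEEDED.
    }
}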
-- EDIT: --
To rephrase the question.
Does HTTP know anything about the status of underlying TCP connection?
TCP is a reliable protocol: when the server sends data to the client, it expects an acknowledgment (ACK) from that client. What happens in HTTP when the underlying server-side TCP connection never receives the ACK?
-- ORIGINAL Question: --
I am trying to solve a design issue on our HTTP client/server app.
Here is the situation:
The server runs on Tomcat, and we are somewhat limited to using Jersey or Servlets for the server-side implementation.
- The client requests data from the server; once the data has been read, it is deleted on the server.
- Data must not be deleted if the client has not received it.
- There is no confirmation from the client as to whether the data was received or not.
- The client implementation cannot be changed in any way.
- The network connection is unstable and can be interrupted often and for long periods of time (e.g. 30 seconds).
The problem: if the client makes a request and shortly afterwards loses its connection to the server, the server will not recognize this; it will delete the data and send it to the client over the dead connection.
Ideally, we want to get an IOException when flushing the data stream to the client and handle it accordingly:
try (ServletOutputStream outputStream = httpServletResponse.getOutputStream()) {
    outputStream.write(bytes);
    outputStream.flush();
} catch (IOException e) { // a broken connection should surface here
    // TODO: do something ...
}
I simulated this locally by killing the client shortly after it sent the request, or by setting a very low client read timeout. In both cases I got a server-side exception (with both Jersey and Servlets).
The last test was sending a request over a real network and pulling the network cable in the process.
Unfortunately I did not get the expected result: the server streamed the data back without recognizing the interrupted connection.
So, does anyone have an idea how to force a server-side exception when the connection to the client is broken?
Any other ideas that don't involve using Sockets or confirmation calls from the client?
Thanks in advance!
Instead of deleting the file in real time, you could write a message to a queue and delete it later. The delete job would check a database where you record whether the client received the file completely.
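A rough sketch of that idea with in-memory placeholders for the queue and the database (markReceived would be driven by whatever completion signal you do have, e.g. a successful flush):

import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class DeferredDeleter {
    // fileName -> received flag; stands in for the database mentioned above
    private final ConcurrentHashMap<String, Boolean> receipts = new ConcurrentHashMap<>();
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    public DeferredDeleter() {
        // periodically delete files whose transfer was flagged as complete
        scheduler.scheduleAtFixedRate(this::sweep, 1, 1, TimeUnit.MINUTES);
    }

    public void markSent(String fileName)     { receipts.putIfAbsent(fileName, false); }
    public void markReceived(String fileName) { receipts.put(fileName, true); }

    private void sweep() {
        receipts.forEach((fileName, received) -> {
            if (received) {
                try {
                    Files.deleteIfExists(Paths.get(fileName));
                    receipts.remove(fileName);
                } catch (Exception e) {
                    // keep the entry; retry on the next sweep
                }
            }
        });
    }
}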
I don't think there's a way to know for certain whether the data arrived at the client unless the client sends an acknowledgement message.
The only solution seems to be not actually deleting the data, but keeping it and setting a 'deleted' flag. But since I don't know the particular use case, I'm not sure whether this helps...
TCP is a two-way protocol.
If you set up an input stream and call InputStream.read(), it should return -1 if the client has disconnected.
More detail here:
Java Sockets: check if client is able to receive message from server
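For example, a minimal socket-level version of that check (arbitrary port; read() blocks until the client sends something or closes, and -1 means the client closed its end of the connection):

import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class DisconnectAwareServer {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(9090)) { // arbitrary port
            try (Socket client = server.accept()) {
                OutputStream out = client.getOutputStream();
                out.write("data...".getBytes());
                out.flush();
                InputStream in = client.getInputStream();
                // -1 here means the client closed its end of the connection
                if (in.read() == -1) {
                    System.out.println("client disconnected");
                }
            }
        }
    }
}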
I need some assistance on a project I am working on. It is a library, itself using Jersey 1.x (1.19.1), that POSTs a JSON document over HTTP and reads the corresponding JSON response from a server.
I am facing a problem when the response from the server is big. The JSON document posted by my client application contains several jobs that must be executed by the server, and the JSON document sent back by the server is made up of the outputs of these jobs. The jobs can be considered independent of each other. The server works in streaming mode, which means it starts to process the jobs before it has received the entire JSON document posted by the client, and it starts to send the outputs of the jobs as soon as they are finished. So the server starts to reply to my client application while the client is still posting the request. Here is my problem: when the request gets big, so does the response (more jobs to do), and my application freezes and at some point terminates.
I spent a lot of time trying to figure out what's happening, and here is what I found and what I inferred.
For handling HTTP communication, Jersey uses a class from the JDK (in rt.jar); I forgot the exact name and don't have access to my work right now, so let's call it HttpConnection.
In this class there is a method checkError() that is invoked and throws an IOException whose only message says it was impossible to write to the server. Debugging, I was able to understand that an attribute of this class named trouble was set to true because a write() method had caught an IOException earlier; checkError() throws its IOException based on that trouble boolean flag. It is not easy to see the causing IOException because the JRE classes are compiled without debugging symbols, but I managed to see that it was a "connection reset by peer" problem.
Then I tried to understand why the server resets the connection. I used an HTTP proxy that captures the traffic between my client application and the server, but this gave me no more clues; it even seems that the proxy itself is unable to handle the connection with the server properly! So I turned to Wireshark to capture the traffic and see what is wrong. Here is what I found.
On the client side, the packets corresponding to the POST of the request JSON document are sent, and the server starts to reply shortly after, as explained above. The server side sends more and more packets, and I noticed that the client-side TCP receive buffer (shown as the TCP window in Wireshark) shrinks as the server sends packets, until it becomes full (size: 0 bytes). At that point the server-side TCP layer can no longer send data to the client side, so its own buffer fills up too. In the end the conversation consists only of retries to send data, on both sides, failing again and again. Ultimately the server decides to send a reset packet. I believe this corresponds to the causing IOException mentioned above.
My understanding is: as long as the server does not start to stream the response, everything is fine. When the server starts to send the response, the client-side TCP buffer starts to fill. But because the client application is not reading the response yet, the content of this buffer is never consumed. Once the server has sent enough data to fill this buffer, it cannot send any more, and its own TCP buffer fills up too because the server keeps pushing data. As a result, the client application cannot finish sending the request JSON document: the communication is blocked in both directions, and the server resets the connection.
My conclusion is: the code, as currently written, does not support such full-duplex communication, because the response from the server is not consumed as it is received. Indeed, stepping through the Jersey code that my library executes, it is clear that the pattern is:
first: connection.getOutputStream().write()
and then: response.getInputStream().read()
In my opinion, the root cause of the problem is that the library I am working on uses Jersey in this synchronous manner, which does not fit the way the server works (streaming the response while the request is still being sent to it).
I searched the Internet a lot for a solution that keeps Jersey 1.19.1, so that the library can be improved with as few impacts as possible, but I failed. This is why I am asking for help here now ;)
So basically my question is: is it possible to do what I need while keeping the Jersey client library 1.19.1, and if so, how? If not, which HTTP client library should I use (one that lets me write a POST request and read the corresponding response at the same time)? A basic example that gets me on track quickly would be much appreciated.
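To make concrete what I mean by "at the same time", here is a rough sketch without any library: a raw socket (plain HTTP; host, path, and payload are placeholders) where a second thread streams the chunked request body while the main thread drains the response as it arrives, so neither TCP buffer can fill up. I would prefer an HTTP client library that gives me this behaviour:

import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class FullDuplexPost {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("example.com", 80)) { // placeholder host
            OutputStream out = socket.getOutputStream();
            out.write(("POST /jobs HTTP/1.1\r\n"
                    + "Host: example.com\r\n"
                    + "Content-Type: application/json\r\n"
                    + "Transfer-Encoding: chunked\r\n"
                    + "\r\n").getBytes(StandardCharsets.US_ASCII));

            // Writer thread: streams the chunked request body.
            Thread writer = new Thread(() -> {
                try {
                    byte[] json = "{\"jobs\": []}".getBytes(StandardCharsets.UTF_8); // placeholder payload
                    out.write((Integer.toHexString(json.length) + "\r\n").getBytes(StandardCharsets.US_ASCII));
                    out.write(json);
                    out.write("\r\n0\r\n\r\n".getBytes(StandardCharsets.US_ASCII)); // last chunk
                    out.flush();
                } catch (Exception e) {
                    e.printStackTrace();
                }
            });
            writer.start();

            // Main thread: drains the response as it arrives, so the
            // client-side TCP window never closes.
            InputStream in = socket.getInputStream();
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                System.out.write(buf, 0, n);
            }
            writer.join();
        }
    }
}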
One last thing: curl works just fine. I can post the exact same JSON document and get the full response with it, so there is no server-side problem, contrary to what I suspected at the very beginning of my investigation. And it scales fine (I tried huge JSON documents). Of course, I made sure the HTTP headers of the POST are the same for my library and for curl.
Thanks a lot for reading this far, and thanks for your answers.
Best regards,
Loïc
We have a Java web service, document style, over HTTP. Locally this service works smoothly and fast (~6 ms), but calling the service methods remotely takes over 200 ms.
One main reason for this delay is the following sequence:
1. the server first sends the response HTTP header,
2. the client sends back an ACK, and
3. the server then sends the response HTTP body.
The second step, where the client sends the ACK, costs the most time, almost the whole 200 ms. I would like to avoid this step and save that time.
Hence my question: is it possible to send the whole response in one packet? And how and where do I configure that?
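To illustrate what I mean by "one packet", a hedged servlet-side sketch (the payload is a placeholder): buffer the whole body and set Content-Length before writing, so the container can emit the status line, headers, and body in one write. Whether that ends up as a single TCP packet depends on the sizes involved and the network stack:

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class OneShotResponseServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // build the complete body first (placeholder payload)
        byte[] body = "<soap:Envelope>...</soap:Envelope>".getBytes(StandardCharsets.UTF_8);
        resp.setContentType("text/xml;charset=UTF-8");
        // with Content-Length known up front, the container can send the
        // headers and body together instead of dribbling them out
        resp.setContentLength(body.length);
        resp.getOutputStream().write(body);
        // no intermediate flush(): flushing early forces the headers out alone
    }
}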
Thanks for any advice.
I'm not fully understanding the question.
Why is the server sending the first message? Shouldn't the client be the one initially requesting the web service via HTTP?
From what I understand, SOAP requests are wrapped in an HTTP message. HTTP messages assume a TCP connection and require a response. This suggests that a client must respond when the server sends an HTTP header.
Basically, whatever one end sends to the other, the other end must acknowledge. The ACK from your step 2 will always be present.
EDIT:
I think the reason for the difference in time between local and remote requests is simply the routing that happens on a real network versus on your local machine. It's not the number of steps in your SOAP request and response.