We have a Java web service using document style over HTTP. Locally, this service works smoothly and fast (~6 ms), but calling the service methods remotely takes over 200 ms.
One main reason for this delay is the following sequence:
1. the server first sends the response HTTP header,
2. the client sends back an ACK, and
3. only then does the server send the response HTTP body.
The second step, where the client sends the ACK, costs the most time, almost the whole 200 ms. I would like to avoid this step and save that time.
Hence my question: is it possible to send the whole response in one packet? And how and where do I configure that?
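A note that may help frame this: the header / pause-for-ACK / body pattern described above is characteristic of Nagle's algorithm interacting with delayed ACKs. A minimal sketch of one common remedy, assuming the code can reach the underlying socket at all (in a servlet container this is normally a connector setting instead):

    import java.io.IOException;
    import java.net.Socket;

    // Disable Nagle's algorithm so small segments are sent immediately,
    // instead of being held back until the previous segment is ACKed.
    public class NoDelay {
        public static Socket open(String host, int port) throws IOException {
            Socket socket = new Socket(host, port); // placeholder endpoint
            socket.setTcpNoDelay(true);
            return socket;
        }
    }

On the server side, the equivalent fix is usually to buffer the whole body and set Content-Length before writing, so the container can push the header and body out in a single write.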
Thanks for any advice.
I don't fully understand the question.
Why is the server sending the first message? Shouldn't the client be requesting the web service via HTTP initially?
From what I understand, SOAP requests are wrapped within an HTTP message. HTTP messages assume a TCP connection and require a response. This suggests that a client must respond when the server sends an HTTP header.
Basically, whatever one end sends, the other end must acknowledge. The ACK from your step 2 will always be present.
EDIT:
I think the reason for the difference in time between local and remote requests is simply the routing that happens in a real network versus on your local machine, not the number of steps in your SOAP request and response.
I'm wondering about a scenario where the client does client-side streaming. During that process it will send three requests; let's assume the server receives only two.
How will the server react in this situation? My guess is that the server will never notify the client that the request is finished (it knows how many requests to expect), and the call will be terminated once the deadline expires, provided a deadline has been defined. Are my assumptions valid?
I'm working on the Java implementation of gRPC.
That's correct. If the server is waiting for the 3rd request, which it never receives, the call will be terminated by the deadline.
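For reference, the deadline is set on the stub in grpc-java; a minimal sketch, where MyServiceGrpc stands in for a hypothetical class generated from a .proto service definition:

    import java.util.concurrent.TimeUnit;
    import io.grpc.ManagedChannel;
    import io.grpc.ManagedChannelBuilder;

    // A call stuck waiting for a message that never arrives ends with
    // DEADLINE_EXCEEDED instead of hanging forever.
    public class DeadlineExample {
        public static void main(String[] args) {
            ManagedChannel channel = ManagedChannelBuilder
                    .forAddress("localhost", 50051)
                    .usePlaintext()
                    .build();
            MyServiceGrpc.MyServiceStub stub = MyServiceGrpc.newStub(channel)
                    .withDeadlineAfter(5, TimeUnit.SECONDS); // whole call must finish in 5 s
            // ... open the client stream via the stub and send messages ...
        }
    }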
I need some assistance with a project I am working on. It's a library that uses Jersey 1.x (1.19.1) to HTTP-POST a JSON document and get the corresponding JSON response from a server.
I am facing a problem when the response from the server is big. The JSON document posted by my client application contains several jobs that must be executed by the server, and the JSON document sent back by the server is made of the outputs of these jobs. The jobs can be considered independent of each other. The server works in streaming mode, which means it starts to process the jobs before it has received the entire JSON document posted by the client, and it starts to send the outputs of the jobs as soon as they are finished. So the server starts to reply to my client application while the application is still posting the request. Here is my problem: when the request gets big, so does the response (more jobs to do), and my application freezes and at some point terminates.
I spent a lot of time trying to figure out what's happening, and here is what I found and what I inferred.
For HTTP communication, Jersey uses a class from the JDK (in rt.jar); I forget the exact name and don't have access to my work right now, but let's call it HttpConnection.
In this class there is a method checkError() that is invoked and throws an IOException whose only message says it was impossible to write to the server.
While debugging, I was able to see that an attribute of this class named trouble had been set to true because a write() method had caught an IOException earlier; checkError() throws an IOException based on that trouble boolean flag. It is not easy to see the causing IOException because the JRE classes are compiled without debugging symbols, but I managed to see that it was a "connection reset by peer" problem.
Then I tried to understand why the server resets the connection. I used an HTTP proxy that captures the HTTP traffic between my client application and the server, but this gave me no further clues; it even seems that the proxy is unable to handle the connection with the server properly as well!
So I tried to use Wireshark to capture the traffic and see what's wrong. Here is what I found.
On the client side, the packets corresponding to the POST of the request JSON document are sent, and the server starts to reply shortly after, as explained above. The server keeps sending more and more packets, and I noticed that the receive buffer of the client's TCP layer (the TCP window in Wireshark) shrinks as the server sends packets, until it is full (window size: 0 bytes). At that point the server's TCP layer cannot send any more data to the client, so its own buffer fills up too. In the end the conversation is nothing but both sides retrying to send data and failing, again and again. Ultimately the server decides to send a reset packet. I believe this corresponds to the causing IOException I mentioned above.
My understanding is: as long as the server does not start to stream the response, everything is fine. When the server starts to send the response, the TCP receive buffer on the client side starts to fill. But since the client application is not yet reading the response, the content of this buffer is never consumed. Once the server has sent enough data to fill this buffer, it cannot send any more, and the buffer of its own TCP layer fills up too because the server keeps pushing data. As a result, the client application cannot finish sending the request JSON document; the communication is blocked on both sides, and the server decides to reset the connection.
My conclusion is: the code, as currently written, does not support such full-duplex communication, because the response from the server is not consumed as it is received. Indeed, stepping through the Jersey code executed by my library in the debugger, it is clear that the pattern is:
first: connection.getOutputStream().write()
and then: response.getInputStream().read()
In my opinion, the root cause of the problem is that the library I am working on uses Jersey in this synchronous manner, which does not fit the way the server works (streaming the response while the request is still being sent to it).
I searched the Internet a lot for a solution that keeps Jersey 1.19.1, so that I can improve the library with as little impact as possible, but I failed. This is why I am asking for help here now ;)
So basically my question is: is it possible to do what I need while keeping the Jersey 1.19.1 client library, and if yes, how? If not, what HTTP client library should I use (one that can write a POST request and read the corresponding response at the same time)? If you could give me a basic example so I can get on track quickly, it would be much appreciated.
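To be concrete about what I mean by writing and reading at the same time, here is a rough sketch of the pattern at the raw-socket level; the host, port, and /jobs path are placeholders, and a real client would still need chunked encoding, header parsing, and error handling:

    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.Socket;
    import java.nio.charset.StandardCharsets;

    // Full-duplex HTTP POST: the request is written while a separate thread
    // keeps draining the response, so the client's TCP receive buffer
    // never fills up and the server is never blocked.
    public class FullDuplexPost {
        public static void post(String host, int port, byte[] json) throws Exception {
            try (Socket socket = new Socket(host, port)) {
                InputStream in = socket.getInputStream();
                Thread reader = new Thread(() -> {
                    try {
                        byte[] buf = new byte[8192];
                        int n;
                        while ((n = in.read(buf)) != -1) {
                            System.out.write(buf, 0, n); // consume/process the response
                        }
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                });
                reader.start();

                OutputStream out = socket.getOutputStream();
                String headers = "POST /jobs HTTP/1.1\r\n"
                        + "Host: " + host + "\r\n"
                        + "Content-Type: application/json\r\n"
                        + "Content-Length: " + json.length + "\r\n"
                        + "Connection: close\r\n\r\n";
                out.write(headers.getBytes(StandardCharsets.US_ASCII));
                out.write(json);
                out.flush();

                reader.join(); // wait until the whole response has been consumed
            }
        }
    }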
One last thing: curl works just fine; I can post the exact same JSON document in full and get the response with it, so there is no problem on the server side, contrary to what I suspected at the very beginning of my investigation. And it scales fine (I tried sending huge JSON documents). Of course, I made sure the HTTP headers of the POST are the same for my library and for curl.
Thanks a lot for reading this and for your answers.
Best regards,
Loïc
My client sends 12 requests (nothing should be wrong with them, since they are all very similar) in a loop to a Servlet on the server (Tomcat).
When I look at the app server's access log, I only see 8 of them. I am not sure whether the client sent all the requests to the server successfully.
Could someone verify that a request isn't logged to the access_log until its response is available? If that is the case, all the requests may have reached the app server correctly, with four responses simply not available yet.
Is there any way to find out why requests get lost? Is there some timeout on the server side, for example, where the server drops a request if it takes too long to respond?
By the way, I am running both client and server on my local machine.
An entry can't be written until the response is sent, otherwise the logger can't know what the response code was; but it is also subject to buffering and flushing.
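For example, with Tomcat's AccessLogValve in server.xml (assuming that default valve is what writes your access_log): entries are buffered and flushed periodically by default, and buffered="false" makes each entry appear as soon as its response completes:

    <Valve className="org.apache.catalina.valves.AccessLogValve"
           directory="logs" prefix="localhost_access_log." suffix=".txt"
           pattern="common"
           buffered="false" />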
I want to identify which response corresponds to which request in Java.
It is easy if each request has its own connection, but if many requests are sent and received through the same socket, it becomes hard.
After searching the web, it seems that the solution is to add a custom HTTP header.
Since I do not have access to the server, my question is: can I add a field to the HTTP request and have the server return the same field to me?
EDIT: To be clearer, I am writing a proxy server, so I receive an HTTP request, relay it to the server, and do the same for the response. My question is focused on the requests that are sent through one specific socket.
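One relevant protocol fact: a server will not echo an arbitrary custom header unless it is programmed to, but HTTP/1.1 requires that responses on a single persistent connection be sent in the same order as the requests were received, so a proxy can correlate them with a per-socket FIFO queue. A minimal sketch (RequestInfo and the method names are illustrative):

    import java.util.ArrayDeque;
    import java.util.Queue;

    // Per-socket request/response matching, relying on the HTTP/1.1 rule
    // that responses on one persistent connection arrive in the same order
    // as their requests were sent.
    public class ConnectionTracker {

        // Hypothetical bookkeeping for a forwarded request.
        public static final class RequestInfo {
            public final String method, uri;
            public RequestInfo(String method, String uri) {
                this.method = method;
                this.uri = uri;
            }
        }

        private final Queue<RequestInfo> inFlight = new ArrayDeque<>();

        // Call when the proxy forwards a request to the server.
        public synchronized void onRequestForwarded(RequestInfo request) {
            inFlight.add(request);
        }

        // Call when a complete response arrives on the same socket:
        // it belongs to the oldest request still awaiting a response.
        public synchronized RequestInfo onResponseReceived() {
            return inFlight.remove();
        }
    }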
Can an asynchronous web service be achieved with the Java spring-ws framework, like how it's explained here?
Basically, when a client sends a request to the server for the first time, the web service on the server will call back the client whenever it has information based on that request. That means the server may reply more than once to the first, initial request from the client.
Suggested approach, based on my experience:
Asynchronous web services are generally implemented in the following model:
1. The client submits a request.
2. The server returns a 202 Accepted response, with a polling/job URL in a header.
3. The client keeps polling the job URL.
4. The server returns 200 OK for the job URL, along with the job response in the body.
You may need to define a response body for a job that is still in progress: when the client polls the server and the server is still processing the request, the body should contain an IN PROGRESS message in a form predefined for the client. Once the server has finished processing, the desired response should be available in the body.
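As a rough illustration of this model (plain servlet code rather than spring-ws, with an assumed /jobs/* mapping and illustrative JSON bodies):

    import java.io.IOException;
    import java.util.Map;
    import java.util.UUID;
    import java.util.concurrent.ConcurrentHashMap;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Sketch of the accept-then-poll model, assuming this servlet is mapped
    // to /jobs/*. The URL layout and bodies are illustrative only.
    public class JobServlet extends HttpServlet {
        private final Map<String, String> results = new ConcurrentHashMap<>();

        @Override // submit: start the job and hand back a polling URL
        protected void doPost(HttpServletRequest req, HttpServletResponse resp) {
            String id = UUID.randomUUID().toString();
            startJobAsync(id); // hypothetical: runs the work, then results.put(id, json)
            resp.setStatus(HttpServletResponse.SC_ACCEPTED); // 202
            resp.setHeader("Location", "/jobs/" + id);       // poll this URL
        }

        @Override // poll: IN PROGRESS until the job's result is stored
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            String id = req.getPathInfo().substring(1); // "/abc" -> "abc"
            String result = results.get(id);
            resp.setContentType("application/json");
            resp.getWriter().write(result != null
                    ? result                              // 200 + job output
                    : "{\"status\":\"IN PROGRESS\"}");    // still processing
        }

        private void startJobAsync(String id) { /* hypothetical job runner */ }
    }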
Hope it helps!