I'm exposing a SOAP web service via CXF and Apache Camel.
My client wants to stop waiting for the response if it has not arrived within 5 seconds of sending the request.
How can I do it technically?
Thanks a lot.
Yes, you can, but you do not need to implement it yourself: a timeout specifies the patience of the caller.
CXF has two timeouts, see the documentation for more information:
connectionTimeout: Specifies the amount of time, in milliseconds, that the consumer will attempt to establish a connection before it times out.
receiveTimeout: Specifies the amount of time, in milliseconds, that the consumer will wait for a response before it times out.
In your case, the CXF client must set a receiveTimeout of 5000 (milliseconds). Perhaps you also want to customize the connectionTimeout. With the receiveTimeout set, the client sends a request to the server, and if the server has not started to send a response within 5 seconds, the client aborts the request.
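A minimal sketch of setting both timeouts on the client proxy through the CXF API (the port variable stands for the JAX-WS proxy from your generated service class; an assumption, since the question doesn't show the client code):

import org.apache.cxf.endpoint.Client;
import org.apache.cxf.frontend.ClientProxy;
import org.apache.cxf.transport.http.HTTPConduit;
import org.apache.cxf.transports.http.configuration.HTTPClientPolicy;

// port is the JAX-WS proxy obtained from the generated service class (illustrative name).
Client client = ClientProxy.getClient(port);
HTTPConduit conduit = (HTTPConduit) client.getConduit();

HTTPClientPolicy policy = new HTTPClientPolicy();
policy.setConnectionTimeout(30000); // ms allowed for establishing the connection
policy.setReceiveTimeout(5000);     // ms allowed for waiting on the response
conduit.setClient(policy);

With receiveTimeout at 5000, the call aborts after 5 seconds if no response has started to arrive.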
I think you will get a SocketTimeoutException if the timeout occurs, but I am not sure about this.
Related
I have a Spring Boot application with an embedded Tomcat server. To limit the impact of DoS attacks I've set the property server.tomcat.connection-timeout to 3 seconds. A connection timeout is the limit of time after which the server will automatically close the connection with the client.
So if, in my case, the client takes more than 3 seconds to finish the request, the connection will automatically time out. However, it's not yet clear to me what exactly happens when it is instead a process on the server side that is causing a delay.
To give an example, my web application uses a HikariCP connection pool that manages connections to the database. It can have a maximum of 10 database connections. If all 10 are in use, any incoming request will have to wait for one of the database connections to become available. If this wait takes more than 3 seconds, will the Tomcat connection time out? Or will the connection remain available, since the delay isn't caused by the client?
Thank you
According to the Tomcat 9.0 documentation, the connection-timeout is:
The number of milliseconds this Connector will wait, after accepting a connection, for the request URI line to be presented. [...] Unless disableUploadTimeout is set to false, this timeout will also be used when reading the request body (if any).
That is, it is the time allowed for the client to send its request. It is unrelated to the time the server takes to respond to the request.
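For reference, the same limit the question sets through server.tomcat.connection-timeout can also be applied programmatically. A minimal Spring Boot 2.x sketch (class and bean names are mine, not from the question):

import org.springframework.boot.web.embedded.tomcat.TomcatServletWebServerFactory;
import org.springframework.boot.web.server.WebServerFactoryCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class TomcatTimeoutConfig {

    // Equivalent of server.tomcat.connection-timeout=3s: the time Tomcat waits for the
    // client to present its request, not the time the server may take to respond.
    @Bean
    public WebServerFactoryCustomizer<TomcatServletWebServerFactory> connectionTimeoutCustomizer() {
        return factory -> factory.addConnectorCustomizers(
                connector -> connector.setProperty("connectionTimeout", "3000")); // milliseconds
    }
}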
So ...
If this wait takes more than 3 seconds, will the tomcat connection time out?
No, it won't¹. In fact, it appears that Tomcat doesn't have any limit on how long a (synchronous) request may take to complete.
Of course, the client could time out the request if the server takes too long. The server is unlikely to notice this, though, so it cannot abandon the request.
¹ Assuming that the documentation is accurate. That config option has been present for a number of Tomcat versions with the same description; if the documentation were wrong, this would surely have been noticed, reported, and fixed by now.
I'm facing this issue:
I have an embedded Grizzly HTTP server running. When sending 200 asynchronous requests to the server (using an ExecutorService in Java), I thought it would serve all of these requests at once, but I realized that the server only serves 8 requests at a time, and no error is thrown. Please give me an explanation for this. Am I misunderstanding anything?
Are you sure that all the requests actually arrived at the server? Do you release resources after each request is processed? If you send more simultaneous requests than the server is configured to handle, the extra ones will have to wait. Have you checked all of these things?
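One possible cause of such a limit, and this is an assumption on my part, is Grizzly's worker thread pool, whose default size is derived from the number of CPU cores. A sketch of sizing the pool explicitly on an embedded server:

import java.io.IOException;

import org.glassfish.grizzly.http.server.HttpServer;
import org.glassfish.grizzly.http.server.NetworkListener;
import org.glassfish.grizzly.threadpool.ThreadPoolConfig;

public class GrizzlyPoolDemo {
    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.createSimpleServer();

        // "grizzly" is the default listener name used by createSimpleServer.
        NetworkListener listener = server.getListener("grizzly");

        // Size the worker pool explicitly; this must happen before start().
        listener.getTransport().setWorkerThreadPoolConfig(
                ThreadPoolConfig.defaultConfig()
                        .setCorePoolSize(64)
                        .setMaxPoolSize(64));

        server.start();
    }
}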
I am using Apache HttpClient to contact an external service. The service can take a few hours, if not longer, to generate its response. I've tried a few different things but have either ended up with socket or read timeouts. I've just tried using RequestConfig to set the socket and connection timeouts to 0, which according to the documentation should mean infinite, but the request always returns after exactly 1 hour. Any thoughts?
I agree with the general sentiment about not trying to keep HTTP connections alive for so long; however, if your hands are tied, you may find you are hitting timeouts at the TCP level, and TCP keep-alives may save the day.
See the link below for help setting TCP keep-alive. The probe timing cannot be tuned from HttpClient; it is an OS setting. With keep-alive enabled, the OS sends probe packets regularly, so your TCP connection never looks idle even if nothing is going on in the HTTP stream.
Apache HttpClient TCP Keep-Alive (socket keep-alive)
Holding TCP connections open for a long time, even when they are active, is hard. YMMV.
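Enabling the SO_KEEPALIVE flag itself can be done from HttpClient (4.3+); only the probe timing comes from the OS. A minimal sketch:

import org.apache.http.config.SocketConfig;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

public class KeepAliveClient {
    public static void main(String[] args) {
        // Ask the OS to send TCP keep-alive probes on this client's sockets.
        // How often probes go out is an OS setting (e.g. tcp_keepalive_time on Linux).
        SocketConfig socketConfig = SocketConfig.custom()
                .setSoKeepAlive(true)
                .build();

        CloseableHttpClient client = HttpClients.custom()
                .setDefaultSocketConfig(socketConfig)
                .build();
        // ... use client, and close it when done.
    }
}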
Ideally, any service call that takes more than a few minutes (say 2-3 minutes or more) should be handled asynchronously instead of keeping a connection open for an hour or longer. It is a waste of resources on both the client and the server side.
Alternate approaches could solve this kind of problem (a sketch of the first, polling-based one follows after this list):
You call the service to trigger processing (i.e., to start preparing the response). It returns a unique request ID immediately.
Then, after an hour or so (once the response is ready), the client calls again, passing the request ID, and the server returns the response.
Another alternative: once the response is ready, the server pushes it to a callback URL, where the client hosts another service specifically for receiving the response prepared in step 1.
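A minimal in-process sketch of the polling variant from steps 1 and 2; every class and method name here is illustrative, not an existing API:

import java.util.Map;
import java.util.Optional;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class AsyncJobService {
    private final ExecutorService workers = Executors.newFixedThreadPool(4);
    private final Map<String, Future<String>> jobs = new ConcurrentHashMap<>();

    // Step 1: trigger processing and return a unique request ID immediately.
    public String startJob(String input) {
        String id = UUID.randomUUID().toString();
        jobs.put(id, workers.submit(() -> expensiveComputation(input)));
        return id;
    }

    // Step 2: the client polls later with the ID; empty means "not ready yet".
    public Optional<String> fetchResult(String id) {
        Future<String> f = jobs.get(id);
        if (f == null || !f.isDone()) {
            return Optional.empty();
        }
        try {
            return Optional.of(f.get());
        } catch (InterruptedException | ExecutionException e) {
            throw new IllegalStateException("job " + id + " failed", e);
        }
    }

    private String expensiveComputation(String input) throws InterruptedException {
        Thread.sleep(1000); // stand-in for the hour-long work
        return "result for " + input;
    }
}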
We have a Java web service with document style over HTTP. Locally this service works smoothly and fast (~6 ms). But calling the service methods remotely takes over 200 ms.
One main reason for this delay is that
the server first sends the HTTP response header,
the client sends an ACK in return, and
then the server sends the HTTP response body.
This second step, where the client sends the ACK, costs the most time, almost the whole 200 ms. I would like to avoid this step and save that time.
So here is my question: is it possible to send the whole response in one packet? And how and where do I configure that?
Thanks for any advice.
I don't fully understand the question.
Why is the server sending the first message? Shouldn't the client be requesting the web service via HTTP initially?
From what I understand, SOAP requests are wrapped within an HTTP message, and HTTP runs over a TCP connection. TCP requires that received segments be acknowledged, which is why the client must respond when the server sends the HTTP header.
Basically, whatever one end sends to the other, the receiving end must acknowledge at the TCP level. The ACK from your step 2 will always be present.
EDIT:
I think the reason for the difference in time between local and remote requests is simply the routing that happens in a real network versus on your local machine. It's not the number of steps taken in your SOAP request and response.
I am trying to raise a timeout error whenever the web service response is delayed, using the lines of code below:
serviceStub._setProperty("weblogic.wsee.transport.connection.timeout", String.valueOf(timeoutSeconds));
BindingInfo bindingInfo = (BindingInfo)serviceStub._getProperty("weblogic.wsee.binding.BindingInfo");
bindingInfo.setTimeout(timeoutSeconds);
But it’s not working.
Server used – Oracle WebLogic Server 10.3
Type of web service – JAX-RPC
Please reply if someone has a solution for this.
There are two kinds of timeout (See What is the difference between connection and read timeout for sockets?)
weblogic.wsee.transport.connection.timeout
Specifies, in seconds, how long a client application that is attempting to invoke a Web service waits to make a connection. After the specified time elapses, if a connection hasn't been made, the attempt times out.
weblogic.wsee.transport.read.timeout
Specifies, in seconds, how long a client application waits for a response from a Web service it is invoking. After the specified time elapses, if a response hasn't arrived, the client times out.
You should set sensible values for both. See this answer for an example.
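A minimal sketch of setting both properties on a JAX-RPC stub, using the property names quoted above (the serviceStub variable and the values are illustrative):

import javax.xml.rpc.Stub;

// serviceStub is your generated JAX-RPC port; both values are in seconds.
Stub stub = (Stub) serviceStub;
stub._setProperty("weblogic.wsee.transport.connection.timeout", "10");
stub._setProperty("weblogic.wsee.transport.read.timeout", "5");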
I think this is what you need: weblogic.wsee.transport.read.timeout
Got that from here: http://docs.oracle.com/cd/E14571_01/web.1111/e13760/client.htm