I have an API that retrieves a huge amount of data, and I set the timeout to 7 minutes, no more than that.
So when the waiting time exceeds 7 minutes I want the operation to be cancelled.
However, users sometimes end up waiting 10 minutes or more.
Below is the code:
final OkHttpClient okHttpClient = new OkHttpClient.Builder()
.connectTimeout(connectTimeOut, TimeUnit.SECONDS)
.readTimeout(readTimeOut, TimeUnit.SECONDS)
.build();
What am I missing here?
Connect timeout is the time allowed to establish a connection, and read/write timeout is the time allowed for read/write operations after the connection is established. So, roughly:
totalTimeout = connectTimeout + readTimeout
Your HTTP client is spending the extra time either on connection establishment or on reads. You have to configure these two timeouts so that the user never has to wait longer than your defined limit.
Note: each individual operation takes at most the time you defined for it.
For a better understanding, see this link:
Http Client Time Out Guide
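For illustration, a minimal sketch with hypothetical values that keep the worst case within the 7-minute budget; if the OkHttp version in use is 3.12 or newer, callTimeout can additionally cap the whole call end to end:
import java.util.concurrent.TimeUnit;
import okhttp3.OkHttpClient;

// Illustrative values only: connect + read stay within the 7-minute budget.
final OkHttpClient okHttpClient = new OkHttpClient.Builder()
        .connectTimeout(30, TimeUnit.SECONDS)   // time allowed to establish the connection
        .readTimeout(6, TimeUnit.MINUTES)       // time allowed for reads after connecting
        .callTimeout(7, TimeUnit.MINUTES)       // caps the entire call (OkHttp 3.12+)
        .build();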
I have a problem sending data to the client with WebFlux. When there is no content in the Flux for about 60 seconds, the response is aborted by the server. Is there any way to wait for the response as long as the client wants to?
Try this, it might be helpful:
.responseTimeout(Duration.ofSeconds(10))
.option(ChannelOption.CONNECT_TIMEOUT_MILLIS, 10 * 1000)
You can also refer to this: Webflux Webclient - increase my Webclient time out (wait a bit more a flaky service)
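For context, a sketch of how those options could be wired into a WebClient (assuming Reactor Netty as the underlying client; the 120-second value is illustrative):
import java.time.Duration;
import io.netty.channel.ChannelOption;
import org.springframework.http.client.reactive.ReactorClientHttpConnector;
import org.springframework.web.reactive.function.client.WebClient;
import reactor.netty.http.client.HttpClient;

// Raise the response timeout well above the 60-second cutoff and set a TCP connect timeout.
HttpClient httpClient = HttpClient.create()
        .responseTimeout(Duration.ofSeconds(120))
        .option(ChannelOption.CONNECT_TIMEOUT_MILLIS, 10 * 1000);

WebClient webClient = WebClient.builder()
        .clientConnector(new ReactorClientHttpConnector(httpClient))
        .build();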
Problem solved: Kubernetes has a default timeout of 60 seconds for every request, so whenever there was no content (for WebFlux), the request was aborted (in my case with status 200).
I fancy making a single request that creates 15k topics in a busy Kafka cluster, something like this:
final Admin admin = ...;
final List<NewTopic> newTopics = IntStream.range(0, 15000)
.mapToObj(x -> "adam-" + x)
.map(x -> new NewTopic(x, Optional.empty(), Optional.empty()))
.collect(toList());
final CreateTopicsResult ctr = admin.createTopics(newTopics);
ctr.all().get(); // Throws exceptions.
Unfortunately this starts throwing exceptions due to embedded timeouts - how can I properly make the request while keeping it simple without batching?
For the sake of argument let's stick to Kafka 3.2 (both client & server).
This can be configured in one of two ways:
A) The per-operation timeout controls how long the underlying NetworkClient waits for the response, so we can change our code to admin.createTopics(newTopics, new CreateTopicsOptions().timeoutMs(Integer.MAX_VALUE)).
B) Alternatively, we could configure the Admin client's default.api.timeout.ms property and not pass an explicit timeout at all - which one is preferable depends on the codebase/team standards. A sketch of both follows.
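A minimal sketch of both options, reusing admin and newTopics from the question (the broker address is a placeholder; Integer.MAX_VALUE effectively means "wait as long as it takes"):
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.CreateTopicsOptions;
import org.apache.kafka.clients.admin.CreateTopicsResult;

// Option A: explicit per-operation timeout.
final CreateTopicsResult ctr = admin.createTopics(newTopics,
        new CreateTopicsOptions().timeoutMs(Integer.MAX_VALUE));

// Option B: raise the Admin client's default API timeout instead.
final Properties props = new Properties();
props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092"); // placeholder address
props.put(AdminClientConfig.DEFAULT_API_TIMEOUT_MS_CONFIG, Integer.MAX_VALUE);
final Admin adminWithLongTimeout = Admin.create(props);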
What was not necessary:
request.timeout.ms - it appears to apply only to a single in-flight request. However, not setting it to a large value (larger than the expected duration) results in somewhat strange behaviour: the original request appears to fail on completion and is then retried (and, for topic creation, the retry completes immediately because the topics were already created by the initial request). This can easily be reproduced by setting request.timeout.ms to a low value:
Cancelled in-flight CREATE_TOPICS request with correlation id 3 due to node 2 being disconnected (elapsed time since creation: 120443ms, elapsed time since send: 120443ms, request timeout: 29992ms)
connections.max.idle.ms - the connection remains active the whole time the NetworkClient is waiting for the upstream broker's response.
metadata.max.age.ms - metadata fetching is only needed for the initial step (figuring out where to send the request); after that we just wait for the response from the known upstream broker.
I am looking into using AsyncHttpClient (AHC) to invoke multiple HTTP requests in parallel and calculate total response time as well as TCP connection time, DNS resolution time, time to first byte etc. of each of these async requests.
AsyncHttpClient is the only Java client I came across which emits events for TCP connection time, DNS resolution time etc.
My question is: what is the correct way to measure the start time of the HTTP request, so that I can calculate the different performance metrics based on it?
Is onRequestSend the correct event to use for the start time? I am looking for an event that indicates the start of the HTTP lifecycle, i.e. creating a socket on the client side to open a connection.
Documentation: https://www.javadoc.io/doc/org.asynchttpclient/async-http-client/latest/org/asynchttpclient/AsyncHandler.html
You can use the Kotlin standard library:
val mark = TimeSource.Monotonic.markNow() // Returned `TimeMark` is inline class
val elapsedDuration = mark.elapsedNow()
See the official documentation.
I would say it is something like:
long t0 = System.currentTimeMillis();
doYourHttpCallAndProvideCallback(response -> {
    long total = System.currentTimeMillis() - t0;
    System.out.printf("It took %dms to do the request%n", total);
});
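A sketch of the same idea with AsyncHttpClient itself, using a monotonic clock (the URL is a placeholder):
import java.util.concurrent.TimeUnit;
import org.asynchttpclient.AsyncHttpClient;
import org.asynchttpclient.Dsl;

// Measure wall-clock total time around an asynchronous request.
try (AsyncHttpClient client = Dsl.asyncHttpClient()) {
    long start = System.nanoTime();
    client.prepareGet("https://example.org/")   // placeholder URL
          .execute()
          .toCompletableFuture()
          .whenComplete((response, error) -> {
              long elapsedMs = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);
              System.out.printf("Total request time: %d ms%n", elapsedMs);
          })
          .join();
}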
WebClientTestService service = new WebClientTestService();
int connectionTimeOutInMs = 5000;
Map<String, Object> context = ((BindingProvider) service).getRequestContext();
context.put("com.sun.xml.internal.ws.connect.timeout", connectionTimeOutInMs);
context.put("com.sun.xml.internal.ws.request.timeout", connectionTimeOutInMs);
context.put("com.sun.xml.ws.request.timeout", connectionTimeOutInMs);
context.put("com.sun.xml.ws.connect.timeout", connectionTimeOutInMs);
Please explain the differences, mainly between connect timeout and request timeout.
I need to know the recommended values for these parameters.
What are the criteria for setting timeout values?
Please explain the differences, mainly between connect timeout and request timeout.
I need to know the recommended values for these parameters.
Connect timeout (10s-30s): How long to wait to make an initial connection, e.g. if the service is currently unavailable.
Socket timeout (10s-20s): How long to wait if the service stops responding after data is sent.
Request timeout (30s-300s): How long to wait for the entire request to complete.
What are the criteria for setting timeout values?
It depends: a web user will get impatient if nothing has happened after 1-2 minutes, whereas a back-end request could be allowed to run longer.
Also consider that server resources are not released until the request completes (or times out) - so if you have too many requests with long timeouts, your server could run out of resources and be unable to service further requests.
The request timeout should be set to a value greater than the expected time for the request to complete, perhaps with some room to allow for occasionally slower performance under heavy load.
Connect/socket timeouts are often set lower, as they normally indicate a server problem that waiting another 10-15s usually won't resolve. A sketch applying these ranges follows.
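As an illustration, values in those ranges applied to the JAX-WS client from the question (the com.sun.xml.ws.* keys are the JAX-WS RI properties shown above; values are in milliseconds and purely illustrative):
import java.util.Map;
import javax.xml.ws.BindingProvider;

// Connect timeout kept low; request timeout sized for the slowest expected call.
Map<String, Object> context = ((BindingProvider) service).getRequestContext();
context.put("com.sun.xml.ws.connect.timeout", 15 * 1000);    // connect timeout: 15 s
context.put("com.sun.xml.ws.request.timeout", 120 * 1000);   // request timeout: 2 min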
In Apache HttpClient 4.3, DefaultHttpRequestRetryHandler's code contains:
if (exception instanceof InterruptedIOException) {
// Timeout
return false;
}
It won't retry if it's a timeout. What's the reason? Sometimes the network is not stable, and I just want to retry the connection. I can use my own RetryHandler, but I want to make sure there is no problem with retrying on timeout.
It won't retry if it's a timeout. What's the reason?
Why should it? A timeout usually defines a maximum period of inactivity between two consecutive operations. Why should the request be retried if it timed out in the first place? If you are willing to wait longer for the operation to complete, you should be using a greater timeout value.
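If the operation legitimately needs more time, a sketch of raising the timeouts in HttpClient 4.3+ (values are illustrative):
import org.apache.http.client.config.RequestConfig;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

// Give slow-but-healthy requests more time instead of retrying on timeout.
RequestConfig requestConfig = RequestConfig.custom()
        .setConnectTimeout(15 * 1000)    // establishing the TCP connection
        .setSocketTimeout(60 * 1000)     // max inactivity between data packets
        .build();

CloseableHttpClient patientClient = HttpClients.custom()
        .setDefaultRequestConfig(requestConfig)
        .build();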
This helped me. I wanted the opposite: to disable the retry option, and the code below does that.
DefaultHttpClient httpClient = new DefaultHttpClient();
// A retry count of 0 disables automatic retries.
DefaultHttpRequestRetryHandler retryHandler = new DefaultHttpRequestRetryHandler(0, true);
httpClient.setHttpRequestRetryHandler(retryHandler);
Thanks
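On HttpClient 4.3+, where DefaultHttpClient is deprecated, a sketch of the equivalent with the builder API:
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

// No automatic retries at all.
CloseableHttpClient noRetryClient = HttpClients.custom()
        .disableAutomaticRetries()
        .build();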
I have used commercially a custom RetryHandler which mimics the Default* one but allows retries for the following exceptions, which we were getting regularly: ConnectTimeoutException and HttpHostConnectException. These exceptions were thrown a lot after a 15s timeout. The connection should be made sub-second, so we now retry up to 3 times with a 5s timeout; this has produced a large increase in successful connections being made on the second attempt.
We are still looking into why these connection requests aren't being made in a timely manner between our Azure App Service and on-prem services.
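A sketch of that kind of handler (illustrative, not the exact production code): retry only connection-establishment failures, up to 3 attempts.
import java.io.IOException;
import org.apache.http.client.HttpRequestRetryHandler;
import org.apache.http.conn.ConnectTimeoutException;
import org.apache.http.conn.HttpHostConnectException;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.protocol.HttpContext;

// Retry ConnectTimeoutException/HttpHostConnectException up to 3 times; everything else is not retried.
HttpRequestRetryHandler connectRetryHandler = (IOException exception, int executionCount, HttpContext context) -> {
    if (executionCount > 3) {
        return false;
    }
    return exception instanceof ConnectTimeoutException
            || exception instanceof HttpHostConnectException;
};

CloseableHttpClient retryingClient = HttpClients.custom()
        .setRetryHandler(connectRetryHandler)
        .build();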