JavaMail connection timeout is not working as per properties - java

JavaMail (javax.mail) version: 1.6.2, with a manually configured JavaMailSender.
I tried setting the timeout with both mail.smtp.timeout and mail.smtps.timeout, and with the value 3000 as both a String and an Integer.
String timeOut = "3000"; // milliseconds; JavaMail reads these properties as strings
Properties props = new Properties();
props.put("mail.smtp.timeout", timeOut);           // socket read timeout
props.put("mail.smtp.connectiontimeout", timeOut); // connection-establishment timeout
props.put("mail.smtp.writetimeout", timeOut);      // socket write timeout
props.put("mail.smtp.auth", "true");
props.put("mail.smtp.starttls.enable", "true");
jmailSender.setJavaMailProperties(props);
return jmailSender;
Sending takes around 7 seconds and never fails.
Since the default is infinite, the properties are most probably not being applied somehow.
Are any properties missing, or is something else wrong?

The properties mail.smtp.connectiontimeout and mail.smtps.connectiontimeout only apply while establishing the connection. They have no effect on timeouts during transport.
The properties mail.smtp.timeout and mail.smtps.timeout govern the time blocked waiting for a read, such as reading SMTP response codes.
The properties mail.smtp.writetimeout and mail.smtps.writetimeout govern writes of chunks of data, which can vary in size.
None of these timeouts represents a deadline for the whole transaction of sending a MIME message. What is happening is that no single action (connect, read, write) exceeds the 3000 ms.
For example, the connect could take 1000 ms, followed by, say, 30 requests (writes) and response parses (reads) taking 100 ms each, and then 3 writes to send the message that take 1000 ms each due to the speed of the network and the size of the message. That is 1000 + (30 * 100) + (3 * 1000) = 7000 ms total, yet no single action exceeded its timeout.
To verify this in a test environment (see the sketch below):
Set all timeouts to 3000.
Set connectiontimeout to 1 and test. You should see the connection fail.
Restart the test by setting it back to 3000 and setting timeout to 1. You should see the reads fail.
Restart the test by setting it back to 3000 and setting writetimeout to 1. You should see the transport fail.
If the test doesn't behave this way, either you haven't set the properties correctly (a typo, or smtp vs. smtps), or you are really lucky to have such low latency.
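A minimal sketch of that three-run test, reusing the jmailSender from the question (the helper name timeoutProps is illustrative, and all values are milliseconds passed as strings):
import java.util.Properties;

// Hypothetical helper: builds the property set with one timeout per run.
private Properties timeoutProps(String connect, String read, String write) {
    Properties props = new Properties();
    props.put("mail.smtp.connectiontimeout", connect); // tiny value => connect fails
    props.put("mail.smtp.timeout", read);              // tiny value => response reads fail
    props.put("mail.smtp.writetimeout", write);        // tiny value => message writes fail
    props.put("mail.smtp.auth", "true");
    props.put("mail.smtp.starttls.enable", "true");
    return props;
}

// Run 1: the connection should fail fast.
jmailSender.setJavaMailProperties(timeoutProps("1", "3000", "3000"));
// Run 2: the reads should fail fast.
jmailSender.setJavaMailProperties(timeoutProps("3000", "1", "3000"));
// Run 3: the transport (writes) should fail fast.
jmailSender.setJavaMailProperties(timeoutProps("3000", "3000", "1"));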

Related

Why did I get a Redis command timeout when using the BRPOP command, whose timeout option (50 seconds) is less than my Redis command timeout setting (200 s)?

I'm using a Redis list as a distributed blocking queue. On the client side, I use the following code:
public String tryAquire(String appName, long timeout, TimeUnit timeUnit) {
    // rightPop(...) issues BRPOP and blocks for at most `timeout` before returning null
    return String.valueOf(redisTemplate.opsForList().rightPop(getKey(appName), timeout, timeUnit));
}
It uses the BRPOP command internally, and the timeout value is always set to less than 50 seconds. The service worked fine for about two weeks, but over the last 2 days I got a few exceptions like this:
org.springframework.dao.QueryTimeoutException: Redis command timed out;
nested exception is io.lettuce.core.RedisCommandTimeoutException:
Command timed out after 200 second(s)
This exception appears 1 or 2 times a day out of roughly 2000 requests per day. After the exception the server keeps working and subsequent requests return to normal latency, but the request that throws it takes more than 200 seconds, which is a very bad case.
This timeout value (200 seconds) matches my Lettuce client-side command timeout setting.
However, for the tryAquire(appName, timeout, timeUnit) method, the maximum blocking time is set to less than 50 seconds, so the command should never take this much longer than 50 s: after 50 seconds, if there is no element in the Redis list, it should simply return null rather than keep waiting. It doesn't look like a network issue either, since there is no socket-related exception in the log, and the request following the command-timeout exception executes successfully.
Just in case someone encounters the same problem, long story short: the NAT mapping expires, and the application only notices the broken network link after the maximum number of TCP retries has been reached.
This service is deployed in a cloud environment that uses a custom SDN. The NAT mapping expires if there is no activity on a TCP connection for some time. However, neither the TCP client nor the server can detect this, so the client keeps trying to send data over the expired mapping until some TCP max-retries limit is reached.
A simple solution: set the Redis server's tcp-keepalive configuration value smaller than the NAT mapping expiry time, or use any other heartbeat mechanism if you don't want to change the Redis server setting (see the sketch below for the client-side variant).
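A client-side sketch of the same idea, assuming a plain Lettuce client (the URL is a placeholder): enabling TCP keepalive makes the OS send probes on idle connections, so the NAT mapping never goes stale. The server-side equivalent is setting tcp-keepalive in redis.conf to a value below the NAT expiry time.
import io.lettuce.core.ClientOptions;
import io.lettuce.core.RedisClient;
import io.lettuce.core.SocketOptions;

RedisClient client = RedisClient.create("redis://localhost:6379"); // placeholder URL
client.setOptions(ClientOptions.builder()
        .socketOptions(SocketOptions.builder()
                .keepAlive(true) // OS-level TCP keepalive probes on idle connections
                .build())
        .build());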

HttpURLConnection timeout seems to be ignored

I'm trying to properly configure the timeouts for my connections using HttpURLConnection.
My problem is that the getResponseCode() call always times out after 60 seconds instead of the value I set. My code:
URL url = new URL(uri.toString());
HttpURLConnection connection = (HttpURLConnection) url.openConnection();
connection.setConnectTimeout(15000); // TCP connect timeout, ms
connection.setReadTimeout(15000);    // socket read timeout, ms
int responseCode = connection.getResponseCode();
What am I missing?
I encountered the same kind of problem a few days back, so I did some research.
My problem is that the getResponseCode() call always times out after 60 seconds instead of the value I set.
This is because InetAddress.getByName(String) does a DNS lookup, and that lookup is not part of the connect timeout. The JDK doesn't let you specify a timeout here; it simply uses the timeouts of the underlying name-resolution mechanism.
The effect is not limited to Java, by the way. You should be able to observe the same timeouts using nslookup or the host command from a terminal. In a "normal" environment, DNS lookup timeouts should be on the order of 1-3 seconds, not tens of seconds. So I strongly suspect your network setup is broken.
Several things can lead to such insane timeouts:
DNS server not reachable (UDP port 53), but ICMP is filtered, so the client cannot fail fast
local firewall on DNS server dropping packets on closed TCP ports instead of sending RST
intermediate firewalls blocking ICMP messages
lookups performed over IPv6, but missing IPv6 connectivity
AAAA record lookups before A record lookup
your DNS server performs full recursion but no caching. Clients should always query a DNS cache, never a recursor only.
Workaround: you can perform the lookup before sending the request, so the result is already cached (see the sketch below).
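A minimal sketch of that workaround (the host name and the 3-second budget are illustrative): run the lookup on a worker thread so you can bound it yourself; the later openConnection() call then hits the already-warm JVM DNS cache.
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.InetAddress;
import java.net.URL;
import java.util.concurrent.*;

ExecutorService resolver = Executors.newSingleThreadExecutor();
Future<InetAddress> lookup =
        resolver.submit(() -> InetAddress.getByName("www.example.com")); // pre-warms the DNS cache
try {
    lookup.get(3, TimeUnit.SECONDS); // our own deadline for name resolution
} catch (TimeoutException e) {
    lookup.cancel(true);
    throw new IOException("DNS lookup timed out", e);
} finally {
    resolver.shutdown();
}

// The result is now cached, so these timeouts cover the rest of the exchange.
HttpURLConnection connection =
        (HttpURLConnection) new URL("http://www.example.com/").openConnection();
connection.setConnectTimeout(15000);
connection.setReadTimeout(15000);
int responseCode = connection.getResponseCode();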

Highly Concurrent Apache Async HTTP Client IOReactor issues

Application description:
I'm using Apache HTTP Async Client (version 4.1.1) wrapped by Comsat's Quasar FiberHttpClient (version 0.7.0) to run a highly concurrent Java application that uses fibers to internally send HTTP requests to multiple HTTP endpoints.
The application runs on top of Tomcat (however, fibers are used only for internal request dispatching; Tomcat servlet requests are still handled the standard blocking way).
Each external request opens 15-20 fibers internally; each fiber builds an HTTP request and uses the FiberHttpClient to dispatch it.
I'm using a c4.4xlarge server (16 cores) to test my application.
The endpoints I'm connecting to close keep-alive connections preemptively, meaning that if I try to reuse sockets, connections get closed during request execution attempts. Therefore, I disable connection recycling.
Given the above, here's the tuning for my fiber HTTP client (of which, of course, I'm using a single instance):
PoolingNHttpClientConnectionManager connectionManager =
        new PoolingNHttpClientConnectionManager(
                new DefaultConnectingIOReactor(
                        IOReactorConfig.custom()
                                .setIoThreadCount(16) // one reactor thread per core
                                .setSoKeepAlive(false)
                                .setSoLinger(0)
                                .setSoReuseAddress(false)
                                .setSelectInterval(10)
                                .build()));
connectionManager.setDefaultMaxPerRoute(32768);
connectionManager.setMaxTotal(131072);

FiberHttpClientBuilder fiberClientBuilder = FiberHttpClientBuilder.create()
        .setDefaultRequestConfig(RequestConfig.custom()
                .setSocketTimeout(1500)  // ms
                .setConnectTimeout(1000) // ms
                .build())
        .setConnectionReuseStrategy(NoConnectionReuseStrategy.INSTANCE)
        .setConnectionManager(connectionManager)
        .build();
ulimits for open files are set very high (131072 for both soft and hard limits).
Eden is set to 18 GB; total heap size is 24 GB.
The OS TCP stack is also well tuned:
kernel.printk = 8 4 1 7
kernel.printk_ratelimit_burst = 10
kernel.printk_ratelimit = 5
net.ipv4.ip_local_port_range = 8192 65535
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.rmem_default = 16777216
net.core.wmem_default = 16777216
net.core.optmem_max = 40960
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.core.netdev_max_backlog = 100000
net.ipv4.tcp_max_syn_backlog = 100000
net.ipv4.tcp_max_tw_buckets = 2000000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_fin_timeout = 10
net.ipv4.tcp_slow_start_after_idle = 0
net.ipv4.tcp_sack = 0
net.ipv4.tcp_timestamps = 1
Problem description
Under low-to-medium load all is well: connections are leased, closed, and the pool replenishes.
Beyond some concurrency point, the IOReactor threads (16 of them) seem to stop functioning properly, prior to dying.
I've written a small thread that fetches the pool stats and prints them each second. At around 25K leased connections, actual data is no longer sent over the socket connections, and the Pending stat climbs to a sky-rocketing 30K pending connection requests.
This situation persists and basically renders the application useless. At some point the I/O reactor threads die; I'm not sure when, and I haven't been able to catch the exceptions so far.
lsof-ing the Java process, I can see it has tens of thousands of file descriptors, almost all of them in CLOSE_WAIT (which makes sense, as the I/O reactor threads die or stop functioning and never get to actually close them).
While the application breaks, the server is not heavily overloaded or CPU-stressed.
Questions
I'm guessing I am hitting some sort of boundary somewhere, though I'm rather clueless as to what or where it may reside, except for the following:
Is it possible I'm hitting an OS port limit (all applicative requests originate from a single internal IP, after all), producing an error that causes the IO Reactor threads to die (something similar to open-files limit errors)?
Forgot to answer this, but I figured out what was going on roughly a week after posting the question:
There was some sort of misconfiguration that caused the IO reactor to spawn with only 2 threads.
Even after providing more reactor threads, the issue persisted. It turns out that our outgoing requests were mostly SSL. Apache's SSL connection handling delegates the core work to the JVM's SSL facilities, which simply are not efficient enough to handle thousands of SSL connection requests per second. More specifically, some methods inside SSLEngine (if I recall correctly) are synchronized. Thread dumps under high load show the IOReactor threads blocking each other while trying to open SSL connections.
Even trying to create a pressure-release valve in the form of a connection lease timeout didn't work, because the backlogs created were too large, rendering the application useless.
Offloading outgoing SSL request handling to nginx performed even worse: because the remote endpoints terminate the connections preemptively, the SSL client session cache could not be used (the same goes for the JVM implementation).
Wound up putting a semaphore in front of the entire module, limiting the whole thing to ~6000 concurrent requests at any given moment, which solved the issue. A sketch follows.
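A minimal sketch of that valve (the 6000 limit comes from the answer; the dispatch method name is illustrative):
import java.util.concurrent.Semaphore;

// Caps in-flight outgoing requests so SSL handshake contention can't snowball.
private static final Semaphore INFLIGHT = new Semaphore(6000);

void dispatch(Runnable httpCall) throws InterruptedException {
    INFLIGHT.acquire();      // block until one of the 6000 slots frees up
    try {
        httpCall.run();      // build and execute the outgoing HTTP request
    } finally {
        INFLIGHT.release();  // always return the permit, even on failure
    }
}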

Stop Socket with timeout from waiting after data read from socket

I am trying to create a Java HTTP server using TCP sockets. HTTP 1.1 has a timeout value that allows the connection to be persistent and wait a short while for possible further data from the client. I am trying to implement this timer in my program by using clientSocket.setSoTimeout(). This does keep the connection open for a certain amount of time, but the socket then waits for that exact amount of time before allowing the next request to be read.
For example:
If timeout is set to 5 seconds,
Request 1 is read. Then the socket hangs and waits until the 5 seconds are over.
Request 2 is read. The socket waits until the 5 seconds are up again.
This becomes a problem if my timeout is set to a large value. It should not happen at all: a request should be processed as soon as it is received, and the timeout should expire only if no data is received throughout the specified duration.
Can anyone advise me on how I could resolve this?
Edit:
For people who face a similar problem, here is my solution:
Since the client waited until the timeout before treating the response as complete, I guessed that the client did not know when all the data from the server had arrived. Hence, I added a Content-Length field to the HTTP response. Now my client no longer hangs after receiving the data; setSoTimeout() does indeed work as stated! A sketch of the fix is below.
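A minimal sketch of that fix, assuming a hand-rolled response writer like the question describes (the body content is illustrative):
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

byte[] body = "<html><body>hello</body></html>".getBytes(StandardCharsets.UTF_8);
OutputStream out = clientSocket.getOutputStream();
// Content-Length tells the client exactly where the body ends, so it stops
// reading immediately instead of blocking until its read timeout fires.
out.write(("HTTP/1.1 200 OK\r\n"
        + "Content-Type: text/html; charset=utf-8\r\n"
        + "Content-Length: " + body.length + "\r\n"
        + "Connection: keep-alive\r\n"
        + "\r\n").getBytes(StandardCharsets.US_ASCII));
out.write(body);
out.flush();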
OK, when you receive a connection, start a new thread like this:
class ClientService extends Thread {
    private final Socket clientSocket;

    public ClientService(Socket clientSocket) {
        this.clientSocket = clientSocket;
    }

    @Override
    public void run() {
        // do your work with the Socket clientSocket here
    }
}
This is how your server code should then look:
while (true) {
    Socket clientSocket = server.accept();    // blocks until a client connects
    new ClientService(clientSocket).start();  // handle each client on its own thread
}
This lets you process requests without them waiting on one another until a timeout expires.
HTTP 1.1 has a timeout value that allows the connection to be persistent and wait a short while for possible further data from the client.
Not really. It has a Connection: keep-alive setting, which is the default behaviour, and it allows endpoints to close connections that aren't in use after a period of idleness, but HTTP itself doesn't have a timeout property.
I am trying to implement this timer in my program by using clientSocket.setSoTimeout().
This has nothing whatsoever to do with HTTP. It is a socket read timeout.
This does keep the connection open for a certain amount of time, but the socket then waits for that exact amount of time before allowing the next request to be read.
No it doesn't. Setting the timeout only causes read methods to throw SocketTimeoutException if no data arrives within the timeout period. Nothing else.
For example:
If timeout is set to 5 seconds,
Request 1 is read. Then the socket hangs and waits until the 5 seconds are over.
No it doesn't.
Request 2 is read. The socket waits until the 5 seconds are up again.
No it doesn't. You've made all this up. It is fantasy.
This becomes a problem if my timeout is set to a large value.
It isn't a problem with any timeout value, large or small, because it simply does not happen.
A request should be processed as soon as it is received, and the timeout should expire only if no data is received throughout the specified duration.
That is exactly what Socket.setSoTimeout() already does (see the sketch below).
Your question is founded on a fallacy.
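A minimal sketch of that behaviour, assuming a serverSocket bound elsewhere: the read returns as soon as bytes arrive, and SocketTimeoutException is thrown only after five full seconds of silence.
import java.io.InputStream;
import java.net.Socket;
import java.net.SocketTimeoutException;

Socket clientSocket = serverSocket.accept();
clientSocket.setSoTimeout(5000); // 5-second read timeout
InputStream in = clientSocket.getInputStream();
byte[] buf = new byte[8192];
try {
    int n = in.read(buf); // returns as soon as any data is available
    // ... parse the request from buf[0..n) ...
} catch (SocketTimeoutException e) {
    // nothing arrived for 5 s: the persistent connection idled out, close it
    clientSocket.close();
}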

SocketTimeoutException when ConnectTimeout and ReadTimeout is infinite? [duplicate]

This question already has answers here:
Closed 10 years ago.
Possible Duplicate:
Receiving request timeout even though connect timeout and read timeout is set to default (infinite)?
I tried to connect to a web service and received a SocketTimeoutException after approximately 20 seconds. The Tomcat server hosting the web service is down, so the exception is expected. However, I did not set the values of my connect timeout and read timeout. According to the documentation, the default values of these two are infinite.
One possibility is that the server I tried connecting to has its own timeout. But when my friend tried to connect to it from iOS, his connection timed out after approximately 1 minute and 15 seconds. If the server were the one issuing the timeout, our connections should have timed out at almost the same time. Please note that he is also using the default timeout of iOS.
Why did my socket time out so early when my connect and read timeouts are set to infinite?
Is the socket timeout different from the connect and read timeouts? If so, how?
How can I know the value of my socket timeout? I am using HttpURLConnection.
Is there a way to set the socket timeout? How?
Below is a snippet of my code:
httpURLConnection = (HttpURLConnection) new URL("http://www.website.com/webservice").openConnection();
httpURLConnection.setDoInput(isDoInput);
httpURLConnection.setDoOutput(isDoOutput);
httpURLConnection.setRequestMethod(method);
try
{
    OutputStreamWriter writer = new OutputStreamWriter(httpURLConnection.getOutputStream());
    writer.write("param1=value1");
    writer.flush(); // flush() is a method call; `writer.flush;` would not compile
}
catch (Exception e)
{
    // an empty catch block hides the real cause of the failure; at least log it
}
Why did my socket time out so early when my connect and read timeouts are set to infinite?
Code please.
Is the socket timeout different from the connect and read timeouts? If so, how?
SocketTimeoutException is a read timeout.
How can I know the value of my socket timeout? I am using HttpURLConnection.
HttpURLConnection.getReadTimeout(); also HttpURLConnection.getConnectTimeout().
Is there a way to set the socket timeout? How?
HttpURLConnection.setReadTimeout().
You have already cited all these methods in your original post. Why are you asking about them here?
Finally, I found what was causing my timeout! It turns out that it is indeed the server causing it. I doubted this at first because I got a different timeout when using iOS, which was more than 1 minute.
So here it is:
The operating system hosting my Tomcat server is Windows, and Windows' default number of retries for an unanswered connection attempt is 2. So when the first connection attempt fails, two retries follow, all done internally by the OS. Each wait roughly doubles, which adds up to 3 + 6 + 12 = 21 seconds:
Initial attempt = 3 seconds
1st retry = 6 seconds
2nd retry = 12 seconds
After the 2nd retry, the connection is cut off, and by that time you have already waited 21 seconds (see the fail-fast sketch below).
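If you would rather not depend on the OS's SYN-retry schedule, a minimal sketch (reusing the URL from the question) is to set an explicit connect timeout so the failure surfaces on your own terms:
HttpURLConnection conn =
        (HttpURLConnection) new URL("http://www.website.com/webservice").openConnection();
conn.setConnectTimeout(5000); // give up after 5 s instead of the OS's ~21 s retry cycle
conn.setReadTimeout(15000);   // bound the wait for response data as well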
