Running my application on WebSphere Application Server 7 with a SQL Server 2008 database. When SQL Server is at 100% CPU, every connection hangs and fills up the connection pool, which in turn leaves every thread hanging. After 10 minutes the log is filled with this:
00000042 ThreadMonitor W WSVR0605W: Thread "WebContainer : 11" (00000049) has been active for 742352 milliseconds and may be hung. There is/are 14 thread(s) in total in the server that may be hung.
The connection pool is using jTDS and has a timeout set to 300 seconds.
I would presume that after 300 seconds every connection would throw an exception, which would then make all the threads un-hang?
Why would the connection throw an exception after 300 seconds? If the connection object is in use, it will stay alive.
Also, which exact timeout attribute are you referring to?
Here is the definition for Connection Timeout:
This value indicates the number of seconds that a connection request
waits when there are no connections available in the free pool and no
new connections can be created. This usually occurs because the
maximum value of connections in the particular connection pool has
been reached.
For example, if Connection timeout is set to 300, and the maximum
number of connections are all in use, the pool manager waits for 300
seconds for a physical connection to become available. If a physical
connection is not available within this time, the pool manager
initiates a ConnectionWaitTimeout exception.
So it does not make 'all the threads un-hang'; it only tells you after 300 seconds that there were no free connections in the pool, so it can't give you one.
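As a rough illustration of that behaviour (a sketch against plain JDBC/JNDI; the JNDI name is a placeholder, and whether the timeout surfaces as a plain SQLException or a container-specific subclass depends on WebSphere's configuration):

```java
import javax.naming.InitialContext;
import javax.sql.DataSource;
import java.sql.Connection;
import java.sql.SQLException;

public class PoolWaitExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical JNDI name; adjust to your own data source definition.
        DataSource ds = (DataSource) new InitialContext().lookup("jdbc/myDataSource");

        try (Connection con = ds.getConnection()) {
            // If every pooled connection is busy, getConnection() blocks here.
            // Only after the configured Connection timeout (e.g. 300 s) does it
            // fail with an SQLException. Connections that are already checked
            // out and executing SQL are not affected by this timeout at all.
            // ... use the connection ...
        } catch (SQLException e) {
            // Reached only when no free connection became available in time.
            e.printStackTrace();
        }
    }
}
```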
The parameter governing how long a transaction may stay active is called the transaction timeout, after which the transaction is marked for rollback. But even this timeout does not cancel the thread using the connection; it only marks the transaction as rollback-only. In order to free the connection you must either use a third-party tool (ITCAM can cancel threads in the server) or terminate/drop the connections from the database side.
I have a very basic Spring Boot 2.2.4 application that queries a downstream system using WebClient with a blocking call. I am not doing any configuration of the WebClient (setting timeouts, etc.), just using it "out of the box".
What I find is that the response time of the WebClient call is either below 3 seconds or exactly 45 seconds, which I find very strange. Why, if the response is slow, is it always 45 seconds?
The only reference to the 45 seconds I could find comes from the Reactor Netty documentation:
4.6. Connection Pool
By default, the TCP client uses a “fixed” connection pool with 500 as the maximum number of the channels and 45s as the acquisition timeout. This means that the implementation creates a new channel if someone tries to acquire a channel but none is in the pool. When the maximum number of the channels in the pool is reached, new tries to acquire a channel are delayed until a channel is returned to the pool again. The implementation uses FIFO order for channels in the pool. By default, there is no idle time specified for the channels in the pool.
Does anybody have any suggestion as to why my slow webclient calls always take 45 seconds to complete?
This is happening because of the fixed pool size of 500. If all of these connections (channels) are currently in use by existing requests, a new request for a connection is queued until one of the 500 connections becomes free. If WebClient isn't able to acquire a connection within 45 seconds (because all 500 channels are still blocked by existing requests), it fails with an acquire timeout exception. I'm assuming the calls that finish in under 3 seconds are the successful ones, while the 45-second ones are the failures. Depending on the throughput of your application, you can adjust the pool size accordingly.
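If the acquisition timeout is indeed what you are hitting, the pool can be tuned when building the WebClient. A minimal sketch, assuming Spring Boot 2.2.x with Reactor Netty 0.9.x (where ConnectionProvider.fixed(name, maxConnections, acquireTimeoutMillis) is available); the numbers and URL are placeholders:

```java
import org.springframework.http.client.reactive.ReactorClientHttpConnector;
import org.springframework.web.reactive.function.client.WebClient;
import reactor.netty.http.client.HttpClient;
import reactor.netty.resources.ConnectionProvider;

public class WebClientConfig {

    public WebClient downstreamWebClient() {
        // Fixed pool: at most 1000 channels, fail after 10 s if none can be acquired.
        ConnectionProvider provider =
                ConnectionProvider.fixed("downstream-pool", 1000, 10_000);

        HttpClient httpClient = HttpClient.create(provider);

        return WebClient.builder()
                .clientConnector(new ReactorClientHttpConnector(httpClient))
                .baseUrl("http://downstream.example.com") // placeholder URL
                .build();
    }
}
```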
In the past, when I've seen delays like this on successful connections, it was caused like this:
You try to connect with a domain name, so the client calls DNS first to get an address;
DNS returns address records for both IPv4 and IPv6
The client tries IPv6 first, but IPv6 is not properly configured on the network, and this fails after the IPv6 connection timeout, which can be in the 45s range.
The client then tries IPv4, which succeeds.
Whether or not this happens to you depends on your OS+version, your java version, and the configuration of your network.
Try connecting with an IPv4 address, like http://192.168.10.1/the/rest. If it's always fast, then you probably have a problem like the one above. Unfortunately you usually can't connect this way with HTTPS, but depending on your OS you could try putting the correct IPv4 address in /etc/hosts.
Look here for more: https://serverfault.com/questions/548777/how-to-prevent-delays-associated-with-ipv6-aaaa-records
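If you want to see what the resolver is actually handing back for your downstream host, a quick sketch (the hostname is a placeholder):

```java
import java.net.InetAddress;

public class DnsCheck {
    public static void main(String[] args) throws Exception {
        // Prints every address record (IPv4 and IPv6) returned for the host,
        // so you can see whether AAAA records are in play at all.
        for (InetAddress addr : InetAddress.getAllByName("downstream.example.com")) {
            System.out.println(addr);
        }
    }
}
```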
I think this is related to this netty issue:
https://github.com/reactor/reactor-netty/issues/1012
I will have a look at that first now.
Thanks for all the replies!
In our code (which runs as a scheduled job via a timer), we have threads running in parallel to perform a database operation. The problem is that each thread initiates a connection via the Hibernate session factory. These connections are closed after every database action but still stay stuck in the connection pool (as INACTIVE). All the connections get released only after the job/main process is killed. Is there any way to release the connections from the connection pool after the database operation? When we use a cron job instead of a timer, the process gets killed automatically, but we don't need cron here.
Kindly help us to resolve this as we are already nearing production release.
Note: We came to know about this when QA tested the job with heavy load, and for each load new connections were pulled.
You need to restrict the number of threads getting created in the thread pool.
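A sketch of what bounding the worker threads might look like (the pool size and task shape are assumptions, not taken from the original job):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ScheduledJob {

    // A fixed pool caps how many threads (and therefore how many Hibernate
    // sessions/connections) can be active at the same time.
    private final ExecutorService workers = Executors.newFixedThreadPool(5);

    public void runJob(Iterable<Runnable> databaseTasks) throws InterruptedException {
        for (Runnable task : databaseTasks) {
            workers.submit(task);
        }
        // Shut down after the job so worker threads (and their resources)
        // do not outlive the run.
        workers.shutdown();
        workers.awaitTermination(5, TimeUnit.MINUTES);
    }
}
```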
dotConnect for Oracle uses connection pooling. The OracleConnection connection string has the Pooling parameter. If Pooling=true (the default value), the connection is not deleted after closing; it is placed in the pool instead. When a new connection with the same connection string is opened, it is taken from the pool (if there are free connections) instead of creating a new one. This provides significant performance improvements. If you use 800 connections that are connected for 10-15 seconds each, and there are only a few different connection strings, you may not have 800 actual connections. Closed connections are placed in the pool and taken from it again when a new connection with the same connection string is opened; no additional connection is opened in that case.
You can disable pooling by adding 'Pooling=false' to the connection string. In that case, a closed connection is deleted from memory and its session is freed. However, this may lead to a performance loss.
Most likely, pooling should not cause too many sessions to be created. Try testing your application with pooling on. If the number of sessions becomes too large, you can disable pooling.
For more information, please refer to http://www.devart.com/dotconnect/oracle/docs/FAQ.html#q54
I have found the root cause for the issue and have also found the solution.
The root cause was the minimum and maximum number of connections and the timeout parameter.
The minimum was 5, the max was 20, and the timeout was 800 seconds. But our job was scheduled to run every minute. With that configuration, the connections were not released within a minute.
Another issue was that our code was not using the session factory as a singleton but was initializing it for each thread. Since the resource was not shared, each session factory created 5 connections by default and could grow to 20. Since the timeout was also high, before those connections were released the next run of the job started and created its own new set of connections.
Finally the pool filled up and Oracle became unavailable.
We fixed this by sharing the session factory as a singleton and by lowering the timeout so that connections are released back to the pool.
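A minimal sketch of the singleton approach described above (the Hibernate configuration details are assumptions; the point is only the "build once, share everywhere" structure):

```java
import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public final class HibernateUtil {

    // Built exactly once per JVM and shared by all worker threads,
    // so only one connection pool is ever created.
    private static final SessionFactory SESSION_FACTORY =
            new Configuration().configure() // reads hibernate.cfg.xml (assumed)
                               .buildSessionFactory();

    private HibernateUtil() {
    }

    public static SessionFactory getSessionFactory() {
        return SESSION_FACTORY;
    }
}
```

Each thread then opens and closes its own Session from the shared factory instead of building a new factory (and a new pool) per thread.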
I have a Spring MVC + MySQL (JDBC 4) + c3p0 0.9.2 project.
In c3p0, the maxIdleTime value is 240 (i.e. 4 minutes), and wait_timeout in MySQL's my.ini is set to 30 seconds.
According to c3p0
maxIdleTime:
(Default: 0)
Seconds a Connection can remain pooled but unused before being discarded. Zero means idle connections never expire.
According to Mysql
wait_timeout: The number of seconds the server waits for activity on a
noninteractive connection before closing it.
Now I have some doubts about this (I know some of the answers; I just want to be sure whether I am correct or not):
Does "unused connection" mean a connection that is in the sleep state according to MySQL?
What are interactive and non-interactive connections?
Are unused connections and non-interactive connections the same? My DBA set wait_timeout to 30 seconds (he arrived at this value by observing the DB server, so that very few connections stay in sleep mode). That means a connection can be in sleep mode for 30 seconds, after which it is closed. But on the other hand, c3p0's maxIdleTime is set to 240 seconds, so what role does the maxIdleTime setting play in this case?
What is interactive_timeout?
First, let's understand the MySQL properties.
interactive_timeout: the idle timeout, in seconds, for interactive MySQL shell sessions such as the mysql command-line client or mysqldump; the connections sit in the sleep state while idle. Most of the time this is set to a higher value because you don't want to get disconnected while you are doing something on the MySQL CLI.
wait_timeout: the number of seconds of inactivity MySQL waits before it closes a non-interactive connection, for example one opened from Java; these connections are also in the sleep state while idle.
Now let's understand the c3p0 properties and their relation to the DB properties. (I am just going to copy from your question.)
maxIdleTime: (Default: 0) Seconds a Connection can remain pooled but unused before being discarded. Zero means idle connections never
expire.
This refers to how long a connection object remains usable and available in the pool. Once the timeout is over, c3p0 will destroy or recycle it.
Now the problem comes when you have maxIdleTime higher than wait_timeout.
Let's say maxIdleTime is 50 seconds and wait_timeout is 40 seconds. Then there is a chance that you will get a connection timeout exception ("Broken pipe") if you try to do any operation in the last 10 seconds. So maxIdleTime should always be less than wait_timeout.
Instead of maxIdleTime you can use the following properties.
idleConnectionTestPeriod sets a limit to how long a connection will
stay idle before testing it. Without preferredTestQuery, the default
is DatabaseMetaData.getTables() - which is database agnostic, and
although a relatively expensive call, is probably fine for a
relatively small database. If you're paranoid about performance use a
query specific to your database (i.e. preferredTestQuery="SELECT 1")
maxIdleTimeExcessConnections will bring back the connectionCount back
down to minPoolSize after a spike in activity.
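Putting the pieces together, a c3p0 setup along these lines (a sketch only; the numbers, URL, and credentials are placeholders, chosen so that maxIdleTime stays below a 30-second wait_timeout):

```java
import com.mchange.v2.c3p0.ComboPooledDataSource;

public class C3p0Config {

    public ComboPooledDataSource mysqlDataSource() throws Exception {
        ComboPooledDataSource ds = new ComboPooledDataSource();
        ds.setDriverClass("com.mysql.jdbc.Driver");             // JDBC 4 MySQL driver
        ds.setJdbcUrl("jdbc:mysql://localhost:3306/mydb");      // placeholder URL
        ds.setUser("appuser");                                  // placeholder credentials
        ds.setPassword("secret");

        ds.setMinPoolSize(5);
        ds.setMaxPoolSize(20);

        // Keep idle pooled connections shorter-lived than MySQL's wait_timeout.
        ds.setMaxIdleTime(25);                  // seconds, < wait_timeout of 30 s
        ds.setIdleConnectionTestPeriod(10);     // test idle connections periodically
        ds.setPreferredTestQuery("SELECT 1");   // cheap test query for MySQL
        ds.setMaxIdleTimeExcessConnections(60); // shrink back to minPoolSize after spikes
        return ds;
    }
}
```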
Please note that any pool property (e.g. maxIdleTime) only affects connections that are in the pool. If Hibernate has acquired a connection and keeps it idle for longer than wait_timeout (maxIdleTime no longer protects it once it is checked out) and then tries to do any operation, you will get a "Broken pipe".
It is good to have a lower wait_timeout on MySQL, but it's not always right when you have an application already built.
You have to make sure, before reducing it, that your application does not keep a connection open for more than wait_timeout.
You also have to consider that acquiring a connection is an expensive task, and if wait_timeout is too low it defeats the whole purpose of having a connection pool, as the pool will frequently have to acquire new connections.
This is especially important when you are not doing connection management manually, for example when you use the Spring transactional API. Spring starts a transaction when you enter an @Transactional annotated method, so it acquires a connection from the pool. If you make a web service call or read a file that takes more time than wait_timeout, you will get an exception.
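For illustration, the kind of method being described looks roughly like this (purely a sketch: JdbcTemplate, the report table, and the sleep standing in for a slow external call are all placeholders, not from the original question):

```java
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class ReportService {

    private final JdbcTemplate jdbcTemplate;

    public ReportService(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    @Transactional
    public void generateReport(long reportId) throws InterruptedException {
        jdbcTemplate.update("UPDATE report SET status = 'RUNNING' WHERE id = ?", reportId);

        // Stand-in for a slow web-service call or file read. The transaction
        // (and its pooled connection) stays open the whole time; if this takes
        // longer than MySQL's wait_timeout, the next statement fails.
        Thread.sleep(60_000);

        jdbcTemplate.update("UPDATE report SET status = 'DONE' WHERE id = ?", reportId);
    }
}
```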
I have faced this issue once.
In one of my projects I had a cron job that did order processing for customers. To make it faster I used batch processing. I would retrieve a batch of customers and do some processing (no DB calls), and when I then tried to save all the orders I used to get a broken pipe exception. The problem was that my wait_timeout was 1 minute and the order processing was taking more time than that, so we had to increase it to 2 minutes. I could have reduced the batch size, but that made the overall processing slower.
Does "unused connection" mean a connection that is in the sleep state according to MySQL?
According to MySQL, this simply means that a connection was established with MySQL/the DB, but there has been no activity on it for the configured amount of time, so due to the MySQL settings (which can be changed) the connection was closed.
What are interactive and non-interactive connections?
Interactive connections are those where your input hardware (keyboard) interacts with MySQL on the command line; in short, where you type the queries yourself.
Non-interactive connections (the ones wait_timeout applies to) are those that your code establishes with MySQL.
Are unused connections and non-interactive connections the same? My DBA set wait_timeout to 30 seconds (he arrived at this value by observing the DB server, so that very few connections stay in sleep mode). That means a connection can be in sleep mode for 30 seconds, after which it is closed. But on the other hand, c3p0's maxIdleTime is set to 240 seconds, so what role does the maxIdleTime setting play in this case?
maxIdleTime is handled by your code, in your Hibernate/JPA configuration, where you tell the pool itself to close a connection (for example) after it has been unused for that long. As a coder you own this setting.
wait_timeout, on the other hand, is on the MySQL side, so it is up to the DB administrator to set it up and change it.
What is interactive_timeout?
Again, the interactive timeout applies when you are writing queries after connecting to MySQL from the keyboard on the command line; that is the configuration value MySQL uses in that case.
If you want to know more about how to change these values, go through this link:
http://www.serveridol.com/2012/04/13/mysql-interactive_timeout-vs-wait_timeout/
Hope it is clear to you now. :)
I am facing RDS connectivity issues sometimes. When this happens, all the connections in my connection pool become stale. With stale connections in the pool I am seeing high latency.
With testConnectionOnCheckout = true, how can I set a timeout for getting a connection from c3p0's connection pool? When my application tries to obtain a connection from the pool, all the connections in the pool are first checked for staleness, and only after the pool is exhausted do I get an exception after the checkoutTimeout interval. So if my checkout timeout is 1000 ms, it takes around 100 ms to check a stale connection, and I have 30 connections in the pool, my request could be stuck for 100*30+1000 = 4000 ms. Is there a way I can put a timeout on obtaining a connection (regardless of whether the pool is exhausted or not)?
With periodic connection checking, I face a weird latency issue. I have set the JDBC read timeout to 2000 ms. When I make calls to my API, some calls do throw an exception within 2000 ms, but a few calls take around 15 seconds or more, which keeps my thread busy and makes my service unavailable even for those APIs that don't depend on the database. The odd thing about this high latency is that every 6th call to the API takes longer. I am not sure how that could happen.
Is there a way to close JDBC connections after a set timeout period? These connections are created in a GenericObjectPool. I know the pool can close idle connections in the pool, but what about connections that are thought to be active? I am trying to control connection leaks in the event someone doesn’t call close(). From what I have read the only way may be to set a timeout on the server side, but I am hoping to find a way in Java. Thank you!
I agree with Peter Lawrey that I would make sure to always close the connection. But if I still had to ensure that a connection is closed (if someone took it from the pool and forgot to return it), I would do it as follows (a rough sketch follows the list):
Decorate the java.sql.Connection that is returned by the pool.
Create a timer in the constructor, set to a configured duration, to tolerate long-running active connections.
If the user calls close on it before the timer fires, I will cancel the timer and return the connection to the pool.
If the timer fires before the connection is closed by the user, I will return the connection to the pool and invalidate the decorated connection so that further calls will throw IllegalStateException.
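Here is that sketch, using a JDK dynamic proxy to decorate the connection (the returnToPool callback stands in for whatever your pool's return/invalidate call actually is; this illustrates the approach rather than being a drop-in implementation):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.util.Timer;
import java.util.TimerTask;
import java.util.concurrent.atomic.AtomicBoolean;

public final class TimedConnectionDecorator implements InvocationHandler {

    private final Connection target;
    private final Runnable returnToPool;          // how the pool takes the connection back
    private final Timer timer = new Timer(true);  // daemon timer thread
    private final AtomicBoolean open = new AtomicBoolean(true);

    private TimedConnectionDecorator(Connection target, Runnable returnToPool, long timeoutMillis) {
        this.target = target;
        this.returnToPool = returnToPool;
        timer.schedule(new TimerTask() {
            @Override
            public void run() {
                // Timer fired before close(): reclaim the connection and
                // invalidate the decorator.
                release();
            }
        }, timeoutMillis);
    }

    public static Connection wrap(Connection target, Runnable returnToPool, long timeoutMillis) {
        TimedConnectionDecorator handler =
                new TimedConnectionDecorator(target, returnToPool, timeoutMillis);
        return (Connection) Proxy.newProxyInstance(
                Connection.class.getClassLoader(), new Class<?>[]{Connection.class}, handler);
    }

    @Override
    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        if ("close".equals(method.getName())) {
            release();            // user closed in time: cancel timer, return to pool
            return null;
        }
        if (!open.get()) {
            // Timer already reclaimed the connection: fail loudly.
            throw new IllegalStateException("Connection was reclaimed after timeout");
        }
        return method.invoke(target, args);
    }

    private void release() {
        if (open.compareAndSet(true, false)) {
            timer.cancel();
            returnToPool.run();   // hand the underlying connection back to the pool
        }
    }
}
```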
I was using the old Apache Commons Pool, but I switched over to the new Apache Tomcat JDBC pool, which actually has a feature to remove connections after a timeout period.
removeAbandoned - (boolean) Flag to remove abandoned connections if they exceed the removeAbandonedTimeout. If set to true a connection is considered abandoned and eligible for removal if it has been in use longer than the removeAbandonedTimeout. Setting this to true can recover db connections from applications that fail to close a connection. See also logAbandoned. The default value is false.
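For reference, a sketch of configuring those properties programmatically on the Tomcat JDBC pool (the URL, driver, credentials, and the 60-second timeout are placeholders):

```java
import org.apache.tomcat.jdbc.pool.DataSource;
import org.apache.tomcat.jdbc.pool.PoolProperties;

public class TomcatPoolConfig {

    public DataSource abandonedAwareDataSource() {
        PoolProperties p = new PoolProperties();
        p.setUrl("jdbc:mysql://localhost:3306/mydb");   // placeholder URL
        p.setDriverClassName("com.mysql.jdbc.Driver");  // placeholder driver
        p.setUsername("appuser");
        p.setPassword("secret");

        // Reclaim connections that have been checked out for longer than 60 s,
        // i.e. connections leaked because close() was never called.
        p.setRemoveAbandoned(true);
        p.setRemoveAbandonedTimeout(60);
        p.setLogAbandoned(true); // log a stack trace of the code that leaked it

        DataSource ds = new DataSource();
        ds.setPoolProperties(p);
        return ds;
    }
}
```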