I have a Java application which accesses a MySQL database. When testing locally, all the connections are closed properly and don't use up the allowed number of active connections, but when I moved the application to a server, the connections stopped closing and kept building up until there were too many connections.
Related
We are using H2, started as a database server process and listening on the standard TCP/IP port 9092.
Our application is deployed in Tomcat and uses connection pooling. We do a purge during idle time, which in the end closes all connections to H2. From time to time we observe errors when the application tries to open a connection to H2 again:
SCHEDULERSERVICE schedule: Exception: Database may be already in use: "Waited for database closing longer than 1 minute". Possible solutions: close all other connection(s); use the server mode [90020-199]
org.h2.jdbc.JdbcSQLNonTransientConnectionException: Database may be already in use: "Waited for database closing longer than 1 minute". Possible solutions: close all other connection(s); use the server mode [90020-199]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:617)
at org.h2.message.DbException.getJdbcSQLException(DbException.java:427)
at org.h2.message.DbException.get(DbException.java:205)
at org.h2.message.DbException.get(DbException.java:181)
at org.h2.engine.Engine.openSession(Engine.java:209)
at org.h2.engine.Engine.createSessionAndValidate(Engine.java:178)
at org.h2.engine.Engine.createSession(Engine.java:161)
at org.h2.server.TcpServerThread.run(TcpServerThread.java:160)
at java.lang.Thread.run(Thread.java:748)
at org.h2.message.DbException.getJdbcSQLException(DbException.java:617)
at org.h2.engine.SessionRemote.done(SessionRemote.java:607)
at org.h2.engine.SessionRemote.initTransfer(SessionRemote.java:143)
at org.h2.engine.SessionRemote.connectServer(SessionRemote.java:431)
at org.h2.engine.SessionRemote.connectEmbeddedOrServer(SessionRemote.java:317)
at org.h2.jdbc.JdbcConnection.<init>(JdbcConnection.java:169)
at org.h2.jdbc.JdbcConnection.<init>(JdbcConnection.java:148)
at org.h2.Driver.connect(Driver.java:69)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
The problem occurs when the Tomcat connection pool closes all idle (unused) connections and the one connection still in use is closed afterwards.
The next attempt to open a new connection fails; a retry succeeds after some wait time.
Under which circumstances does this exception happen?
What does the exception mean?
Are there any recommendations to follow to avoid the problem?
It sounds to me as if H2 closes the database after the last connection has been closed.
When does the database close occur?
How can database closures be controlled?
Thanks in advance
Thorsten
An embedded database in a web application needs careful handling of its lifecycle.
You can add a javax.servlet.ServletContextListener implementation (marked with the @WebListener annotation or registered in web.xml) and perform an explicit database shutdown in its contextDestroyed() method.
You can force the database shutdown there with connection.createStatement().execute("SHUTDOWN"). If your application needs to write something to the database during unload, it should do so before that command.
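For illustration, a minimal sketch of such a listener, assuming the pool is exposed through JNDI under the made-up name jdbc/h2ds:

import java.sql.Connection;
import java.sql.Statement;
import javax.naming.InitialContext;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;
import javax.sql.DataSource;

@WebListener
public class H2ShutdownListener implements ServletContextListener {

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        // nothing to do on startup
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        try {
            // "jdbc/h2ds" is a hypothetical JNDI name; use your pool's actual resource name
            DataSource ds = (DataSource) new InitialContext()
                    .lookup("java:comp/env/jdbc/h2ds");
            try (Connection conn = ds.getConnection();
                 Statement stmt = conn.createStatement()) {
                // closes the database; any final writes must happen before this point
                stmt.execute("SHUTDOWN");
            }
        } catch (Exception e) {
            // the context is going down anyway, so just log the failure
            sce.getServletContext().log("H2 shutdown failed", e);
        }
    }
}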
Without the explicit shutdown, H2 closes the database when all connections are closed, unless some other behavior was configured explicitly (with parameters in the JDBC URL, for example). For instance, DB_CLOSE_DELAY sets an additional delay; maybe your application uses that setting and therefore H2 doesn't close the database immediately, or the application doesn't close all connections immediately.
Anyway, when you update the web application on the fly, Tomcat initializes the new version before the old version is unloaded. If H2 is on the classpath of the web application itself, the new version will be unable to connect to the database during the short period when the new version is already online but the old version isn't unloaded yet.
If you don't like that, you can run a standalone H2 Server process and use remote connections to it from your web applications.
Another option is to move H2 to the classpath of Tomcat itself and configure the connection pool as a resource in server.xml; in that case it shouldn't be affected by the lifecycle of your applications.
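For illustration, such a resource could look roughly like this in server.xml, placed inside <GlobalNamingResources> and exposed to the application with a <ResourceLink> in its context; the names, URL, credentials, and pool sizes are placeholders:

<Resource name="jdbc/h2ds" auth="Container" type="javax.sql.DataSource"
          driverClassName="org.h2.Driver"
          url="jdbc:h2:tcp://localhost:9092/~/appdb"
          username="sa" password=""
          maxTotal="20" maxIdle="5"/>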
In both these cases you shouldn't use the SHUTDOWN command.
UPDATED
With client-server connections to a remote server, such an exception means that the server decided to close the database because there were no active connections. This operation can't be interrupted or reverted in the middle. An attempt to open a new connection to the same database during this process waits at most 1 minute for the process to complete, so that the database can be re-opened. This timeout is not configurable.
There are two possible solutions.
The DB_CLOSE_DELAY setting can be used with some large value in seconds. When all connections are closed, the database will stay open for the specified number of seconds. -1 can also be used to set an infinite delay.
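For example, a URL along these lines (the database path is illustrative) keeps the database open until the server process itself exits:

jdbc:h2:tcp://localhost:9092/~/appdb;DB_CLOSE_DELAY=-1

The same setting can also be applied at runtime with the SQL statement SET DB_CLOSE_DELAY -1.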
You can try to speed up the shutdown process, but you have to figure out for yourself what is taking so much time. The file compaction procedure is limited to 200 milliseconds by default; it may run somewhat longer, but I think it shouldn't be that long. Maybe you have a lot of temporary objects or uncommitted data, or very high fragmentation of the database file. It's hard to say what's going wrong without further investigation.
I'm using the UnboundID LDAP SDK for Java to query Active Directory over LDAP.
Right now I create a connection every time, so I'd like to change my code to use a connection pool. I've found some examples of how to create a connection pool and how to get and return connections (link). I plan to do this using Spring components on a Mule ESB server, but I'm worried about connections left open after my server or the AD server restarts, or after communication problems.
How can I:
prevent my code from leaving connections open after my server restarts
reconnect, or recreate connections, after AD communication problems
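For what it's worth, here is a minimal sketch of both points with UnboundID's LDAPConnectionPool; the host, port, credentials, and base DN are placeholders. The pool replaces connections it finds broken when they are borrowed, and the retry flag makes it transparently retry an operation once if the connection it used turns out to be dead:

import com.unboundid.ldap.sdk.LDAPConnection;
import com.unboundid.ldap.sdk.LDAPConnectionPool;
import com.unboundid.ldap.sdk.LDAPException;
import com.unboundid.ldap.sdk.SearchResult;
import com.unboundid.ldap.sdk.SearchScope;

public class AdPoolExample {
    public static void main(String[] args) throws LDAPException {
        // placeholder host, port, and bind credentials
        LDAPConnection seed = new LDAPConnection("ad.example.com", 389,
                "CN=svc,OU=Users,DC=example,DC=com", "secret");
        // pool with 1 initial and 10 maximum connections
        LDAPConnectionPool pool = new LDAPConnectionPool(seed, 1, 10);
        // retry a failed operation once on a fresh connection,
        // e.g. after an AD restart invalidated the pooled one
        pool.setRetryFailedOperationsDueToInvalidConnections(true);
        try {
            SearchResult result = pool.search("DC=example,DC=com",
                    SearchScope.SUB, "(sAMAccountName=jdoe)");
            System.out.println("Entries found: " + result.getEntryCount());
        } finally {
            // closing the pool on shutdown releases all pooled connections
            pool.close();
        }
    }
}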
We have a PostgreSQL 9.6 instance on an Ubuntu 18.04 machine. When we restart Java services deployed in a Kubernetes cluster, the already existing idle connections don't get removed and the services create new connections on each restart. Because of this we have hit the connection limit many times, and we have to terminate the connections manually every time. The same service versions are deployed on other instances, but we don't see this scenario on those servers.
I have some questions regarding this:
Can it be a PostgreSQL configuration issue? I didn't find any timeout-related setting differences between the two instances (one is working fine and the other isn't).
If it is a Java service issue, what should I check?
If it is neither a PostgreSQL issue nor a Java issue, what should I look into?
If the client process dies without closing the database connection properly, it takes a while (2 hours by default) for the server to notice that the connection is dead.
The mechanism for that is provided by TCP and is called keepalive: after a certain idle time, the operating system starts sending keepalive packets. After a certain number of such packets without response, the TCP connection is closed, and the database backend process will die.
To make PostgreSQL detect dead connections faster, set the tcp_keepalives_idle parameter in postgresql.conf to less than 7200 seconds.
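For example, settings along these lines in postgresql.conf (the values are illustrative) make the server drop a dead client after roughly two minutes instead of two hours:

tcp_keepalives_idle = 60        # seconds of inactivity before the first keepalive probe
tcp_keepalives_interval = 10    # seconds between unanswered probes
tcp_keepalives_count = 6        # unanswered probes before the connection is closed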
I have two application servers located in two different data centers, running the application in active-active mode. The application DB is also hosted across the same two data centers in active-passive mode. I am receiving intermittent connection reset errors on the application server in the other datacenter when it connects to the DB, with no ORA/Java error codes provided. The datacenter diagram is provided here.
java.sql.BatchUpdateException: Io exception: Connection reset
A network appliance somewhere between your app server and the database may kill the socket based on inactivity. This happens with large connection pools where not all connections are used frequently. It can be resolved by turning on TCP keepalive on all the JDBC connections; to do so, set the JDBC property "oracle.net.keepAlive" to "true".
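For illustration, a minimal sketch of setting that property when opening connections directly; the URL and credentials are placeholders, and most pool implementations accept the same property through their connection-properties configuration:

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class OracleKeepAlive {
    public static Connection open() throws Exception {
        Properties props = new Properties();
        props.setProperty("user", "appuser");      // placeholder credentials
        props.setProperty("password", "secret");
        // ask the Oracle thin driver to enable TCP keepalive on its sockets
        props.setProperty("oracle.net.keepAlive", "true");
        // placeholder host and service name
        return DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost.example.com:1521/ORCLPDB1", props);
    }
}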
I'm currently developing a Java WebSocket application that is deployed on WildFly 10. I cannot post the code, but here's the logic:
Multiple threads poll a database every 5 seconds (a select query, reusing a PreparedStatement after closing the previous ResultSet) and send the results via WebSocket to all connected clients.
I have configured a datasource that connects to a MySQL server (localhost).
The application runs fine until, a while later, it crashes and the log fills with 'Unable to get managed connection from datasource' errors. The WebSocket also fails with 'ClosedChannelException'.
Services on the same server that open a connection and close it immediately work fine. However, the 5-6 threads in the code concerned must use connections again every 5 seconds, so each thread is given a dedicated connection that is only torn down when the application context is destroyed.
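For illustration, a rough sketch of one such polling thread, rewritten to borrow a pooled connection on each poll instead of holding a dedicated one; the JNDI name and the query are made up. Returning the connection between polls lets the pool validate and replace connections the server has silently reset:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class PollingTask implements Runnable {
    @Override
    public void run() {
        try {
            // "java:/MySqlDS" is a hypothetical JNDI name for the configured datasource
            DataSource ds = (DataSource) new InitialContext().lookup("java:/MySqlDS");
            while (!Thread.currentThread().isInterrupted()) {
                try (Connection conn = ds.getConnection();
                     PreparedStatement ps = conn.prepareStatement(
                             "SELECT id, payload FROM events WHERE updated > ?")) {
                    ps.setLong(1, System.currentTimeMillis() - 5000);
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            // push each row to the connected WebSocket sessions here
                        }
                    }
                } // the connection goes back to the pool here
                Thread.sleep(5000); // poll every 5 seconds
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        } catch (Exception e) {
            e.printStackTrace(); // placeholder error handling
        }
    }
}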
Another thing: when the application fails, it runs for a shorter time after each disable/enable cycle. Only a reboot gets it working properly again.
The same project works without error on GlassFish.
Somehow, WildFly seems to periodically reset either the DB connections or all TCP connections altogether.
Is there a setting relevant to WildFly's behaviour towards threads? I have verified that only as many threads as intended are actually created.
Any help would be appreciated.
Edit: This application works well on my local machine. When I deploy it on the remote server, it works for a while (3 hours at most) before failing altogether.
I use NetBeans 8 to compile, if that helps.