Websockets/threads/DB connections keep failing on Wildfly - java

I'm currently developing a Java Websocket application that is deployed on Wildfly 10. I cannot post the code, but here's the logic:
Multiple threads poll a database every 5 seconds (a select query, reusing a PreparedStatement after closing the previous ResultSet) and send the results via WebSocket to all connected clients.
I have configured a datasource that connects to a MySQL server (localhost).
The application runs fine for a while, then it crashes and the log fills with 'Unable to get managed connection from datasource' errors. The WebSocket also fails with 'ClosedChannelException'.
Services on the same server that open a connection and close it immediately work fine. However, the code in question has 5-6 threads that need a connection again every 5 seconds, so each thread is given a dedicated connection that is only torn down when the application context is destroyed.
Another thing: when the application fails, disabling and re-enabling the deployment gets it working again, but for a shorter time each time. Only a reboot gets it working properly again.
The same project works without error on GlassFish.
Somehow, WildFly seems to periodically reset either the DB connections or all TCP connections altogether.
Is there a setting that affects how WildFly treats threads? I have verified that only as many threads as intended are actually created.
Any help would be appreciated.
Edit: This application works well on my local machine. When I deploy it on the remote server, it works for a while (3 hours at most) before failing altogether.
I use NetBeans 8 to compile, if that helps.
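Since I can't post the actual code, here is a rough sketch of the pattern (class, table, and JNDI names are made up, not the real ones):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.Set;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import javax.annotation.Resource;
import javax.sql.DataSource;
import javax.websocket.Session;

// Sketch only: one dedicated connection per poller, a reused PreparedStatement,
// results pushed to all connected WebSocket clients every 5 seconds.
public class StatusPoller {

    @Resource(lookup = "java:/jdbc/MyDS")   // made-up datasource JNDI name
    private DataSource dataSource;

    private final ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(6);

    public void start(final Set<Session> clients) throws Exception {
        final Connection conn = dataSource.getConnection();        // held until context shutdown
        final PreparedStatement ps = conn.prepareStatement("SELECT status FROM device");

        scheduler.scheduleAtFixedRate(() -> {
            try (ResultSet rs = ps.executeQuery()) {                // each ResultSet is closed before the next poll
                StringBuilder payload = new StringBuilder();
                while (rs.next()) {
                    payload.append(rs.getString(1)).append('\n');
                }
                for (Session s : clients) {
                    if (s.isOpen()) {
                        s.getBasicRemote().sendText(payload.toString());
                    }
                }
            } catch (Exception e) {
                e.printStackTrace();                                // the connection errors show up here
            }
        }, 0, 5, TimeUnit.SECONDS);
    }
}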

Related

JdbcSQLNonTransientConnectionException: Database may be already in use: "Waited for database closing longer than 1 minute"

We are using H2 started as a database server process, listening on the standard TCP/IP port 9092.
Our application is deployed in Tomcat and uses connection pooling. We purge connections during idle time, which eventually results in closing all connections to H2. From time to time we observe errors when the application tries to open a connection to H2 again:
SCHEDULERSERVICE schedule: Exception: Database may be already in use: "Waited for database closing longer than 1 minute". Possible solutions: close all other connection(s); use the server mode [90020-199]
org.h2.jdbc.JdbcSQLNonTransientConnectionException: Database may be already in use: "Waited for database closing longer than 1 minute". Possible solutions: close all other connection(s); use the server mode [90020-199]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:617)
at org.h2.message.DbException.getJdbcSQLException(DbException.java:427)
at org.h2.message.DbException.get(DbException.java:205)
at org.h2.message.DbException.get(DbException.java:181)
at org.h2.engine.Engine.openSession(Engine.java:209)
at org.h2.engine.Engine.createSessionAndValidate(Engine.java:178)
at org.h2.engine.Engine.createSession(Engine.java:161)
at org.h2.server.TcpServerThread.run(TcpServerThread.java:160)
at java.lang.Thread.run(Thread.java:748)
at org.h2.message.DbException.getJdbcSQLException(DbException.java:617)
at org.h2.engine.SessionRemote.done(SessionRemote.java:607)
at org.h2.engine.SessionRemote.initTransfer(SessionRemote.java:143)
at org.h2.engine.SessionRemote.connectServer(SessionRemote.java:431)
at org.h2.engine.SessionRemote.connectEmbeddedOrServer(SessionRemote.java:317)
at org.h2.jdbc.JdbcConnection.<init>(JdbcConnection.java:169)
at org.h2.jdbc.JdbcConnection.<init>(JdbcConnection.java:148)
at org.h2.Driver.connect(Driver.java:69)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
The problem occurs when the Tomcat connection pool closes all idle (unused) connections and a connection that is still in use is closed afterwards.
The next attempt to open a new connection fails; a retry succeeds after some wait time.
Under which circumstances does this exception happen?
What does the exception mean?
Are there any recommendations to follow to avoid the problem?
It seems to me that H2 closes the database after the last connection has been closed.
When does the database close occur?
How can database closures be controlled?
Thx in advance
Thorsten
An embedded database in a web application needs careful handling of its lifecycle.
You can add a javax.servlet.ServletContextListener implementation (marked with the @WebListener annotation or registered in web.xml) and perform an explicit database shutdown in its contextDestroyed() method.
You can force the database shutdown there with connection.createStatement().execute("SHUTDOWN"). If your application needs to write something to the database during unload, it should do so before that command.
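A minimal sketch of such a listener, assuming the connection comes from your pool's DataSource (the resource name is made up):

import java.sql.Connection;
import java.sql.SQLException;
import javax.annotation.Resource;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;
import javax.sql.DataSource;

// Shuts the embedded H2 database down explicitly when the web application is undeployed.
@WebListener
public class H2ShutdownListener implements ServletContextListener {

    @Resource(name = "jdbc/appDb")   // assumed resource name of the pooled DataSource
    private DataSource dataSource;

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        // nothing to do on startup
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        try (Connection connection = dataSource.getConnection()) {
            // any final writes must happen before this statement
            connection.createStatement().execute("SHUTDOWN");
        } catch (SQLException e) {
            sce.getServletContext().log("H2 shutdown failed", e);
        }
    }
}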
Without the explicit shutdown, H2 closes the database when all connections are closed, unless some other behaviour was configured explicitly (with parameters in the JDBC URL, for example). DB_CLOSE_DELAY, for instance, sets an additional delay; maybe your application uses that setting and therefore H2 doesn't close the database immediately, or the application doesn't close all connections immediately.
Anyway, when you try to update the web application on the fly, Tomcat initializes the new version before the old version is unloaded. If H2 is on the classpath of the web application itself, the new version will be unable to connect to the database during the short period when the new version is already online but the old version hasn't been unloaded yet.
If you don't like that, you can run a standalone H2 Server process and use remote connections to it in your web applications.
Another option is to move H2 to the classpath of Tomcat itself and configure the connection pool as a resource in server.xml; in that case it shouldn't be affected by the lifecycle of your applications.
In both of these cases you shouldn't use the SHUTDOWN command.
UPDATED
With client-server connections to a remote server, this exception means that the server decided to close the database because there were no active connections. That operation can't be interrupted or reverted in the middle. An attempt to open a new connection to the same database while the close is in progress waits for at most 1 minute for it to complete before the database is re-opened. This timeout is not configurable.
There are two possible solutions.
The DB_CLOSE_DELAY setting can be used with some large value in seconds. When all connections are closed, the database will stay online for the specified number of seconds. -1 can also be used to set an infinite timeout.
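For example, the setting can be appended to the JDBC URL (the database path here is just a placeholder):

// Keep the database open for 10 minutes after the last connection closes;
// use -1 to keep it open until an explicit SHUTDOWN or server stop.
String url = "jdbc:h2:tcp://localhost:9092/~/appdb;DB_CLOSE_DELAY=600";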
You can try to speed up the shutdown process, but you will have to figure out yourself what takes so much time. The file compaction procedure is limited to 200 milliseconds by default; it may take somewhat longer, but it shouldn't be that long. Maybe you have a lot of temporary objects or uncommitted data, or very high fragmentation of the database file. It's hard to say what's going wrong without further investigation.

PostgreSQL JDBC Connection issue

We have a PostgreSQL 9.6 instance on an Ubuntu 18.04 machine. When we restart the Java services deployed in a Kubernetes cluster, the already existing idle connections don't get removed and the services create new connections on each restart. Because of this we have reached the connection limit many times, and we have to terminate connections manually every time. The same service versions are deployed on other instances, but we don't see this behaviour on the other servers.
I have some questions regarding this:
Can it be a PostgreSQL configuration issue? I didn't find any timeout-related setting differences between the two instances (one works fine and the other doesn't).
If this is a Java service issue, what should I check?
If it's neither a PostgreSQL issue nor a Java issue, what should I look into?
If the client process dies without closing the database connection properly, it takes a while (2 hours by default) for the server to notice that the connection is dead.
The mechanism for that is provided by TCP and is called keepalive: after a certain idle time, the operating system starts sending keepalive packets. After a certain number of such packets without response, the TCP connection is closed, and the database backend process will die.
To make PostgreSQL detect dead connections faster, set the tcp_keepalives_idle parameter in postgresql.conf to less than 7200 seconds.
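For example, in postgresql.conf (the values are only illustrative; the related interval and count settings can be tuned alongside it):

# detect dead client connections after a few minutes instead of the 2-hour default
tcp_keepalives_idle = 300       # seconds of idle time before the first keepalive probe
tcp_keepalives_interval = 10    # seconds between probes
tcp_keepalives_count = 5        # lost probes before the connection is considered dead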

Zombie connections after hosting

I have an application developed in Java using MySQL and C3P0 for connection pooling. The application works perfectly fine on localhost; the connection management is superb. However, I uploaded this application to a Daily Razor "private JVM", and there are far more MySQL connections than the application should ever make. Normally the application makes at most 10 connections, but on the host I can see 30 or more.
Apart from that, on localhost I always had a number of MySQL processes running, but on the online server I can only see 2. It's as if everything is upside down. The application works fine, but there have been a number of times I had to restart the server due to slow connection issues.
What could cause this kind of thing? Please don't ask for code, because I don't know where the issue is.
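The pool limits and leak detection in question live in c3p0's configuration; a c3p0.properties sketch of the kind of settings involved (values are illustrative, not my actual config):

c3p0.maxPoolSize=10
c3p0.minPoolSize=3
c3p0.maxIdleTime=300
c3p0.idleConnectionTestPeriod=60
# leak detection: log a stack trace if a connection is not returned within 30 s
c3p0.unreturnedConnectionTimeout=30
c3p0.debugUnreturnedConnectionStackTraces=true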

How to save time when opening JDBC MYSQL connections

I have a very short Java application that just opens a connection to a remote MySQL database, reads some data, prints it, and exits. The most time-consuming part of the application is the database connection.
Currently I have only a single thread, and my only concern is to save the time of opening the connection.
I thought of several ways to make it faster, but it turned out they do not help:
Connection pooling - doesn't help, because the pool lives only during a single run of the application. When the application terminates, the pool is gone, and when I re-run the application I have to re-open all the connections in the pool.
mysql-proxy - connects only to the local server: mysql-proxy for a remote MySQL server
TCP/IP server - I thought of running a local TCP/IP server that would keep a persistent open connection and hand it to a TCP/IP client on request. However, Connection objects cannot be serialized, so there is no way to pass the Connection object between the server and the client.
Any other option?
Generally, connecting to a DB is the most time-consuming operation. If the application has to be started and stopped, there is little you can do.
Using connection pooling in a web server, and having your app talk to that web server using JSON, might be an option.
You said you have a very short application, so your third option might work if you put the database logic into your "option 3" TCP/IP server and just forward the results to the connecting client. This is a typical application server pattern.
You should also consider network lookup time (https://stackoverflow.com/q/3641155/1055715), which Marc B mentioned in his comment.
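A rough sketch of that application-server pattern, using the JDK's built-in com.sun.net.httpserver as a tiny local service (URL, credentials, query and port are placeholders; a real service would return JSON and use a proper pool instead of one shared connection):

import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import java.sql.*;

// Keeps the expensive MySQL connection open and returns query results over HTTP,
// so the short-lived client application never opens a JDBC connection itself.
public class QueryServer {
    public static void main(String[] args) throws Exception {
        // NOTE: a single shared connection for simplicity; a real service would pool
        Connection conn = DriverManager.getConnection(
                "jdbc:mysql://qa-srv:3308/mydb", "user", "password");

        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/data", (HttpExchange exchange) -> {
            StringBuilder body = new StringBuilder();
            try (Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery("SELECT name FROM items")) {
                while (rs.next()) {
                    body.append(rs.getString(1)).append('\n');
                }
            } catch (SQLException e) {
                body.append("error: ").append(e.getMessage());
            }
            byte[] bytes = body.toString().getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, bytes.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(bytes);
            }
        });
        server.start();
    }
}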
It turns out the best solution is to use mysql-proxy with a script that handles connection pooling (a combination of my first two options). I found one such script here:
http://forge.mysql.com/tools/tool.php?id=151
It was probably written for an older version of mysql-proxy, so I had to fix it (if anyone needs the fixed version, write to me).
It works like a charm - I run the exact same application as before; the only change is the connection string: instead of connecting to "qa-srv:3308" (the remote server), I connect to "127.0.0.1:4040" (the proxy server).
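In code, the only difference is the JDBC URL (database name and credentials are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;

public class ProxyConnectDemo {
    public static void main(String[] args) throws Exception {
        // before: jdbc:mysql://qa-srv:3308/mydb (direct to the remote server)
        // after: the local mysql-proxy, which keeps pooled connections to qa-srv:3308
        Connection c = DriverManager.getConnection("jdbc:mysql://127.0.0.1:4040/mydb", "user", "pass");
        System.out.println("connected: " + !c.isClosed());
        c.close();
    }
}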

Websphere application server 5.1 DataSource no longer valid when DB is rebooted

First of all, we are running a Java web application on WAS 5.1. Behind that, we use an Oracle database. The problem we're facing is really simple, but after a couple of hours of Google searching, I decided to ask you.
We have an application running on WAS. When we start the server, WAS sets up its DataSource so that it points to the database. Everything works fine, except when the DBAs have to reboot the database server. When they do, the data source is no longer valid and we have to manually restart all the servers; we are currently trying to correct that, if possible. We need to find a way to do it because we have 3 pre-production environments for our application, and each has two servers associated with it: one for the application and the other for a report-generator web service. So, when the DBAs want to reboot the database server (and they usually don't tell us!) we have to reboot six servers. I was wondering if, in Java, there is a way to reset the data source so that we don't need to restart the servers.
For your information, WebSphere is v5.1 and Oracle is 9g, with Java 1.4.2.17.
We also use RAD:
Version: 6.0.1
Build id: 20050725_1800
You should configure your application server to always test the connection before leasing it out to a client. I'm not that familiar with WebSphere, but in WebLogic you can set a JDBC SQL statement such as SELECT 1 FROM DUAL, and the container removes stale connections from the connection pool.
Here is a link on how to do it in WebSphere:
http://www-01.ibm.com/support/docview.wss?uid=swg21439688
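Conceptually, the pretest boils down to something like this before the pool hands a connection to the application (a rough sketch in modern Java syntax, not WebSphere's actual internals):

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

// Rough illustration of a "pretest connection" check: run a cheap validation
// query and treat the connection as stale if it fails.
public class ConnectionValidator {

    public static boolean isUsable(Connection connection) {
        try (Statement st = connection.createStatement()) {
            st.executeQuery("SELECT 1 FROM DUAL");   // Oracle validation query
            return true;
        } catch (SQLException e) {
            return false;                            // stale: the pool should discard it
        }
    }
}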
Based on what I read in your note, you should be getting a StaleConnectionException, as WAS has stale handles in its pool because the DB has been restarted.
The data source can be configured to purge the entire pool once a stale connection is detected. The default policy is to purge only the individual connection.
Adopting this would save you from having to restart your WAS servers.
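Until the purge policy is changed, the application side can also just retry once with a fresh connection when a stale handle fails; a generic sketch (query and datasource are placeholders, written in modern Java syntax):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

// Generic retry-once pattern: if the first attempt fails because the pooled
// handle is stale (the DB was rebooted), get a fresh connection and try again.
public class RetryingQuery {

    private final DataSource dataSource;

    public RetryingQuery(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public int countRows() throws SQLException {
        SQLException lastFailure = null;
        for (int attempt = 0; attempt < 2; attempt++) {
            try (Connection conn = dataSource.getConnection();
                 PreparedStatement ps = conn.prepareStatement("SELECT COUNT(*) FROM reports");
                 ResultSet rs = ps.executeQuery()) {
                rs.next();
                return rs.getInt(1);
            } catch (SQLException e) {
                lastFailure = e;   // the first failure evicts the stale handle; retry once
            }
        }
        throw lastFailure;
    }
}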
There are a number of resources in this space
http://www-01.ibm.com/support/docview.wss?uid=swg21063645
HTH
Manglu
