MySQL hosted DB, JDBC: after a while the application freezes - Java

I run a Java program and try to use it with a hosted DB; I run the vServer (Ubuntu 12.04) with MySQL myself and have full root access.
I changed my.cnf to give MySQL more resources.
When I start the application it is fast, with hardly any difference from a local database.
My problem is that after a while of inactivity the program freezes, probably because the connection was dropped.
There is no entry in any error log. If I kill the application and restart it, it works again; nothing but kill works (it is a Linux PC).
I used ?autoReconnect=true but I am not sure this is correct; the tables are InnoDB.
Does anyone have an idea how to keep the connection from dropping, or how to make sure a reconnection is made?
PS [17.12.2015]:
?autoReconnect=true was removed.
Today I got some details after a long wait:
com.openbravo.basic.BasicException:
com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet successfully received from the server was 2.435.471 milliseconds ago. The last packet sent successfully to the server was 959.832 milliseconds ago.
com.mysql.jdbc.exceptions.jdbc4.CommunicationsException:
Communications link failure
The last packet successfully received from the server was 2.435.471 milliseconds ago. The last packet sent successfully to the server was 959.832 milliseconds ago.
java.net.SocketException:
Is this what happens when wait_timeout is too short?
After the "error" the program worked again!

Your question lacks specific info, but I can guess what is happening. MySQL Server has a parameter called wait_timeout (see the official docs).
When a connection stays idle longer than that timeout, MySQL closes it, and if you don't handle the resulting SQLExceptions properly, your application will have problems.
You can try to increase wait_timeout or review your connection code to handle these exceptions, but both are workarounds.
I wouldn't recommend relying on autoReconnect as a fix for any problem; instead, it is better to encapsulate connection management in your business logic so that you open and close the connection every time you need it. Connection pooling may also help you.
I mean, when you call a business method from your UI (it doesn't matter whether it is web, web service, desktop or whatever), you have to open the connection and start the transaction (and handle other cross-cutting concerns such as authorization, auditing and logging). Throughout the business logic, handle possible exceptions, commit or roll back, and free your resources, as in the sketch below.
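A minimal sketch of that pattern, assuming a DataSource-backed connection pool; the OrderService class, the orders table and its columns are made up for illustration:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.DataSource;

public class OrderService {

    private final DataSource dataSource;

    public OrderService(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public void placeOrder(long customerId, long productId) throws SQLException {
        // Borrow a connection only for the duration of this business method.
        try (Connection conn = dataSource.getConnection()) {
            conn.setAutoCommit(false);
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO orders (customer_id, product_id) VALUES (?, ?)")) {
                ps.setLong(1, customerId);
                ps.setLong(2, productId);
                ps.executeUpdate();
                conn.commit();
            } catch (SQLException e) {
                try { conn.rollback(); } catch (SQLException ignored) { }
                throw e; // let the caller decide how to report it
            }
        } // the connection is returned to the pool here, even on failure
    }
}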
If you post some code, you will get more specific answers.
Hope it helps!

autoReconnect is dangerous for InnoDB. When the connection is lost in the middle of a transaction, the actions already performed in that transaction are rolled back, but with autoReconnect the code carries on as if the transaction were still in progress. Subsequent writes can then be inconsistent with the rolled-back data.
You would be better off recognizing the lost connection and restarting the transaction.
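A hedged sketch of that approach: TransactionWork is a made-up callback, and with Connector/J a "Communications link failure" typically surfaces as an SQLNonTransientConnectionException (or SQLRecoverableException, depending on driver and version).

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.SQLNonTransientConnectionException;
import java.sql.SQLRecoverableException;
import javax.sql.DataSource;

public final class TransactionRetry {

    public interface TransactionWork {
        void run(Connection conn) throws SQLException;
    }

    // Re-run the *whole* transaction on a fresh connection instead of relying
    // on autoReconnect to silently continue on a new one.
    public static void runWithRetry(DataSource ds, TransactionWork work) throws SQLException {
        for (int attempt = 1; ; attempt++) {
            try (Connection conn = ds.getConnection()) {
                conn.setAutoCommit(false);
                try {
                    work.run(conn);   // all statements of the transaction
                    conn.commit();
                    return;
                } catch (SQLException e) {
                    try { conn.rollback(); } catch (SQLException ignored) { }
                    throw e;
                }
            } catch (SQLNonTransientConnectionException | SQLRecoverableException e) {
                if (attempt >= 3) {
                    throw e;          // give up; nothing was half-committed
                }
                // connection was lost: loop and restart the whole transaction
            }
        }
    }
}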

Related

JdbcSQLNonTransientConnectionException: Database may be already in use: "Waited for database closing longer than 1 minute"

We are using H2 started as a database server process, listening on the standard TCP/IP port 9092.
Our application is deployed in Tomcat and uses connection pooling. We do a purge during idle time, which eventually results in closing all connections to H2. From time to time we observe errors when the application tries to open a connection to H2 again:
SCHEDULERSERVICE schedule: Exception: Database may be already in use: "Waited for database closing longer than 1 minute". Possible solutions: close all other connection(s); use the server mode [90020-199]
org.h2.jdbc.JdbcSQLNonTransientConnectionException: Database may be already in use: "Waited for database closing longer than 1 minute". Possible solutions: close all other connection(s); use the server mode [90020-199]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:617)
at org.h2.message.DbException.getJdbcSQLException(DbException.java:427)
at org.h2.message.DbException.get(DbException.java:205)
at org.h2.message.DbException.get(DbException.java:181)
at org.h2.engine.Engine.openSession(Engine.java:209)
at org.h2.engine.Engine.createSessionAndValidate(Engine.java:178)
at org.h2.engine.Engine.createSession(Engine.java:161)
at org.h2.server.TcpServerThread.run(TcpServerThread.java:160)
at java.lang.Thread.run(Thread.java:748)
at org.h2.message.DbException.getJdbcSQLException(DbException.java:617)
at org.h2.engine.SessionRemote.done(SessionRemote.java:607)
at org.h2.engine.SessionRemote.initTransfer(SessionRemote.java:143)
at org.h2.engine.SessionRemote.connectServer(SessionRemote.java:431)
at org.h2.engine.SessionRemote.connectEmbeddedOrServer(SessionRemote.java:317)
at org.h2.jdbc.JdbcConnection.<init>(JdbcConnection.java:169)
at org.h2.jdbc.JdbcConnection.<init>(JdbcConnection.java:148)
at org.h2.Driver.connect(Driver.java:69)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
The problem occurs when the Tomcat connection pool closes all idle (unused) connections and a connection that is still in use is closed afterwards.
The next attempt to open a new connection fails; a retry succeeds after some wait time.
Under which circumstances does this exception happen?
What does the exception mean?
Are there any recommendations to follow to avoid the problem?
It sounds to me as if H2 closes the database after the last connection has been closed.
When does the database close occur?
How can database closures be controlled?
Thx in advance
Thorsten
An embedded database in a web application needs careful handling of its lifecycle.
You can add a javax.servlet.ServletContextListener implementation (marked with the @WebListener annotation or registered in web.xml) and put an explicit database shutdown into its contextDestroyed() method.
You can force the database shutdown there with connection.createStatement().execute("SHUTDOWN"). If your application needs to write something to the database during unload, it should do so before that command.
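A rough sketch of such a listener; how you obtain the DataSource is application-specific, and the "h2.dataSource" context attribute used here is only a placeholder:

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;
import javax.sql.DataSource;

@WebListener
public class H2ShutdownListener implements ServletContextListener {

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        // nothing to do at startup
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        DataSource ds = (DataSource) sce.getServletContext().getAttribute("h2.dataSource");
        if (ds == null) {
            return; // placeholder lookup failed; nothing to shut down
        }
        try (Connection conn = ds.getConnection();
             Statement st = conn.createStatement()) {
            // Any final writes must happen before this statement.
            st.execute("SHUTDOWN");
        } catch (SQLException e) {
            sce.getServletContext().log("H2 shutdown failed", e);
        }
    }
}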
Without an explicit shutdown, H2 closes the database when all connections are closed, unless some other behavior was configured explicitly (with parameters in the JDBC URL, for example). DB_CLOSE_DELAY sets an additional delay; maybe your application uses that setting and therefore H2 doesn't close the database immediately, or the application doesn't close all connections immediately.
Anyway, when you try to update the web application on the fly, Tomcat initializes the new version before the old version is unloaded. If H2 is on the classpath of the web application itself, the new version will be unable to connect to the database during the short period when the new version is already online but the old version isn't unloaded yet.
If you don't like that, you can run a standalone H2 server process and use remote connections to it from your web applications.
Another option is to move H2 to the classpath of Tomcat itself and configure the connection pool as a resource in server.xml; in that case it shouldn't be affected by the lifecycle of your applications.
In both these cases you shouldn't use the SHUTDOWN command.
UPDATED
With client-server connections to a remote server, such an exception means that the server decided to close the database because there were no active connections. This operation can't be interrupted or reverted in the middle. An attempt to open a new connection to the same database during this process waits for at most 1 minute for the close to complete before re-opening the database. This timeout is not configurable.
There are two possible solutions.
The DB_CLOSE_DELAY setting can be used with some large value in seconds: when all connections are closed, the database stays online for the specified number of seconds. -1 can also be used to set an infinite timeout.
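A small sketch with placeholder host, database path and credentials, showing the setting appended to the JDBC URL (it can also be issued as SQL with SET DB_CLOSE_DELAY):

import java.sql.Connection;
import java.sql.DriverManager;

public class H2CloseDelayExample {
    public static void main(String[] args) throws Exception {
        // DB_CLOSE_DELAY is given in seconds; -1 keeps the database open until an
        // explicit SHUTDOWN. Alternative SQL form: SET DB_CLOSE_DELAY -1
        String url = "jdbc:h2:tcp://localhost:9092/~/appdb;DB_CLOSE_DELAY=600";
        try (Connection conn = DriverManager.getConnection(url, "sa", "")) {
            // with this setting the server keeps the database open for 10 minutes
            // after the last connection has been closed
        }
    }
}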
You can try to speed up the shutdown process, but you will have to figure out yourself what takes so much time. The file compaction procedure is limited to 200 milliseconds by default; it may take longer, but I don't think it should be that long. Maybe you have a lot of temporary objects or uncommitted data, or very high fragmentation of the database file. It's hard to say what's going wrong without further investigation.

Keep the database connection open or reopen it every time it is needed?

I know this sort of question has been asked a few times; I read some of the answers but did not get any smarter.
My Java application connects to a database server via JDBC through an SSH tunnel. The tunnel is opened once at the beginning. Initially I opened the database connection every time it was used. Due to changes in the app I needed the connection opened on startup and decided to keep it open until my application is closed. When I close the app I sometimes, not always, get the following error:
- Could not retrieve transation read-only status server
java.sql.SQLException: Could not retrieve transation read-only status server
Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet successfully received from the server was 369.901 milliseconds ago. The last packet sent successfully to the server was 8 milliseconds ago.
Caused by: java.io.EOFException: Can not read response from server. Expected to read 4 bytes, read 0 bytes before connection was unexpectedly lost.
Could this be because of the always-open database connection? I test the application only for short periods. It will run on 4 computers all day long. Can I expect this error more often then? The connection is used every few minutes, so it should be more performant to keep it open, but during a break it may not be used for half an hour.
What would you recommend? Always reopen the connection, or keep it as it is and find a workaround for when I get this error? Do you maybe have another idea why this error appears?
Just ask if you need more of the error log, database code or anything else.
Thanks!
I'm going to say yes, your issue is probably due to the fact that you have a persistent connection open.
I took over a website a while ago and the guy before me had the same idea: open a single connection, send the queries through it when needed, and never close it. A month after I took over the site, the database stopped returning query results, just like this.
As a general rule of thumb and for good programming practice, always clean up after yourself. If you're not using a variable, set it to null and let it go. Not using a connection? Terminate it. This is far less prominent in Java than in C++, since Java does most of the cleaning up for you, but connections are a resource you still have to release yourself, as in the sketch below.
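A short sketch of that habit with plain JDBC, using try-with-resources so every resource is closed even when an exception is thrown; the URL, credentials and query are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class ShortLivedQuery {

    public static int countUsers() throws SQLException {
        String url = "jdbc:mysql://localhost:3306/app?user=app&password=secret";
        try (Connection conn = DriverManager.getConnection(url);
             PreparedStatement ps = conn.prepareStatement("SELECT COUNT(*) FROM users");
             ResultSet rs = ps.executeQuery()) {
            rs.next();
            return rs.getInt(1);
        } // connection, statement and result set are all closed here
    }
}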

What are the implications of running a query against a MySQL database via Hibernate without starting a transaction?

It seems to me that we have some code that is not starting a transaction, yet for read-only operations our queries via JPA/Hibernate, as well as straight SQL, seem to work. A Hibernate/JPA session would have been opened by our framework, but for a few spots in legacy code we found no transactions were being opened.
What seems to end up happening is that the code usually runs as long as it does not use EntityManager.persist or EntityManager.merge. However, once in a while (maybe 1 in 10 times) the servlet container fails with this error...
Failed to load resource: the server responded with a status of 500 (org.hibernate.exception.JDBCConnectionException: The last packet successfully received from the server was 314,024,057 milliseconds ago. The last packet sent successfully to the server was 314,024,057 milliseconds ago. is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem.)
As far as I can tell, only the few spots in our application code that do not start a transaction before a query is made have this problem. Does anybody else think it could be the non-transactional queries that cause this behaviour?
FYI here is our stack...
-Guice
-Guice-Persist
-Guice-Servlet
-MySql 5.1.63
-Hibernate/C3P0 4.1.4.Final
-Jetty
Yes, I think so.
If you run a query without opening a transaction, a transaction is opened automatically by the underlying layer. The connection, with that transaction still open, is returned to the connection pool and given to another user, who then receives a connection with an already-open transaction, and that can lead to inconsistent state.
Here in my company we had a lot of problems in the past with read-only non-transactional queries, and adjusted our framework to handle this.
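For the Guice-Persist stack listed above, a sketch of wrapping even a read-only query in a transaction, so the pooled connection is never handed back with an implicitly opened, never-committed transaction; the Customer entity and the JPQL query are made up for illustration:

import java.util.List;
import javax.inject.Inject;
import javax.inject.Provider;
import javax.persistence.EntityManager;
import com.google.inject.persist.Transactional;

public class CustomerRepository {

    @Inject
    private Provider<EntityManager> entityManager;

    @Transactional // guice-persist begins and commits the transaction around this method
    public List<?> findActiveCustomers() {
        return entityManager.get()
                .createQuery("select c from Customer c where c.active = true")
                .getResultList();
    }
}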
Besides that, we talked to the BoneCP developers and they agreed to develop a set of features to help handle this problem, such as automatically rolling back uncommitted transactions returned to the pool and printing a stack trace of the method that forgot to commit the transaction.
This matter was discussed here:
http://jolbox.com/forum/viewtopic.php?f=3&t=98

Java MySQL server getting slower

I manage my connections with a JDBC connection pool (BoneCP) and I always close the Connection, the PreparedStatement and the ResultSet.
But when my program has been running for several days, the MySQL server gets slower and slower (for testing, I let my program insert an entry every second). After 2 days there were several seconds between the entries, which is why I think the MySQL server is getting slower and can no longer handle the incoming transactions. Am I right?
The MySQL server also uses much more RAM and does not release the resources. Does anyone know how I could find the error causing this behaviour? Thanks in advance!
Use MySQL Workbench to detect open connections. It also gives you a host of options to inspect the performance of your database server.
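The same information can also be pulled over JDBC if you prefer code; a quick sketch (the URL and credentials are placeholders) that prints the current connection count and the running threads:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ConnectionMonitor {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:mysql://localhost:3306/mysql", "monitor", "secret");
             Statement st = conn.createStatement()) {

            // How many clients are currently connected?
            try (ResultSet rs = st.executeQuery("SHOW STATUS LIKE 'Threads_connected'")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1) + " = " + rs.getString(2));
                }
            }
            // What is each connection doing right now?
            try (ResultSet rs = st.executeQuery("SHOW PROCESSLIST")) {
                while (rs.next()) {
                    System.out.println(rs.getString("Id") + " " + rs.getString("User")
                            + " " + rs.getString("Command") + " " + rs.getString("Time") + "s");
                }
            }
        }
    }
}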
Also [I might be mistaken about this part of your question], when you say
I use connection pooling
why do you close the connection? Isn't that the opposite of the purpose of connection pooling?

Spring UncategorizedSQLException: ORA-01012

I am trying to retrieve data from an Oracle database using JDBC (ojdbc14.jar). I have a limited number of concurrent connections to the database, and these connections are managed by the WebSphere connection pool.
Sometimes when I make the call I see an UncategorizedSQLException thrown in my logs with one of the following Oracle codes:
ORA-01012 (not logged in) exception
ORA-17410 (connection timed out, socket empty),
ORA-02396 exceeded maximum idle time, please connect again
Other times I get no exceptions and it works fine.
Anyone understand what might be happening here?
In WebSphere I have my statement cache size set to 10. Not sure whether it is relevant in this situation, where it looks like the connection is being dropped.
It looks like the database is deciding to drop the connection. It's a good idea to write your code in a way that doesn't require that a connection be held forever. A better choice is to have the program connect to the database, do its work, and disconnect. This eliminates the problem of the database deciding to disconnect the application due to inactivity/server overload/whatever, and the program needing to figure this out and make a reasonable stab at reconnecting.
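A sketch of that connect/work/disconnect pattern against a container-managed pool; the JNDI name, table and query are placeholders for whatever is configured in WebSphere. Note that close() on a pooled connection only returns it to the pool, so borrowing a connection per unit of work is cheap.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class InvoiceDao {

    public int countUnpaidInvoices() throws Exception {
        DataSource ds = (DataSource) new InitialContext().lookup("jdbc/appDataSource");
        try (Connection conn = ds.getConnection();
             PreparedStatement ps = conn.prepareStatement(
                     "SELECT COUNT(*) FROM invoices WHERE status = ?")) {
            ps.setString(1, "UNPAID");
            try (ResultSet rs = ps.executeQuery()) {
                rs.next();
                return rs.getInt(1);
            }
        } // the connection goes back to the WebSphere pool here
    }
}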
I hope this helps.
