When the MySQL database crashes, how do I know in Java - java

I am using J2EE and a MySQL database, making a connection using JDBC. I want to know when the database crashes and how to handle it. Is the only way to check the database to see whether connections are established successfully, or is there another way?

You can't and shouldn't need to know. As far as your program is concerned, either everything works perfectly or you can't get a connection.
The program has to behave correctly if a connection can't be acquired, which in most cases probably involves some form of error display and ceasing of operations. Naturally you'll want to have some form of monitoring so that someone will know to check what's wrong with the database, but it's not your program's responsibility.

There are actually a lot of things that can go wrong in a database that might constitute a crash, a major failure or a partial failure. A table could get dropped by an admin, you could run out of space, etc, etc. These might allow you to get a connection or they might not.
It is probably going to be very difficult to distinguish between what constitutes a major failure (database down) and what is a partial failure that might allow the system to continue.
Obviously, if you can't connect to the database, you can't do much with your application. That might indicate the database has "crashed", but it could also be a network problem if your database is on another server. It doesn't really matter.
Most applications that I have worked on don't do much in terms of checking for major failures. But there are a few strategies that might help (a minimal sketch of the first follows the list):
Have some "sanity checks" (e.g., query a known table). If the sanity checks fail, you have a critical problem.
Have an error count. If more than a certain number of errors occur in a given time, you have a critical problem.
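A minimal sketch of such a sanity check, assuming a small table (here called some_known_table) that you know exists in your schema:

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

// Returns false if a trivial query against a known table fails for any reason,
// which we treat as a sign of a critical database problem.
static boolean isDatabaseHealthy(Connection conn) {
    try (Statement stmt = conn.createStatement()) {
        stmt.executeQuery("SELECT 1 FROM some_known_table LIMIT 1"); // placeholder table name
        return true;
    } catch (SQLException e) {
        return false;
    }
}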

How do I know in my Java code when my DB crashes?
As of JDBC 4.1 (Java SE 7) there is the Connection.setNetworkTimeout(Executor, int) method, which makes JDBC operations stop executing and throw an SQLException if they take longer than the timeout you set.
To use this mechanism, set the network timeout to the maximum number of milliseconds you are willing to wait to hear back from the DB. If any operation takes longer than that, it stops waiting and the method throws an SQLException.
An example of how setNetworkTimeout could be used:
Connection conn = // get your connection however
conn.setNetworkTimeout(Executors.newSingleThreadExecutor(),
        10 * 1000); // 10s timeout
Statement stmt = conn.createStatement();
// suppose DB crashes at this point in time
try {
    stmt.execute("SOME SQL HERE");
} catch (SQLException sqle) {
    // could be from network timeout,
    // but could also be caused by something else
    // You can check what your JDBC driver does
}
As you can see from this example, the fact that a network timeout surfaces only as a plain SQLException makes the mechanism harder to work with. I really wish the architects of JDBC 4.1 had required some subclass of SQLException to be thrown, so that there was a portable way of telling when a network timeout occurs. Something like SQLNetworkTimeoutException as a subclass of SQLException would have been nice.
In any case, your best bet for checking whether your SQLException was caused by a network timeout is to inspect the exception's message String, but be a bit careful doing this, as different JDBC drivers will have different messages for network timeouts.
Another thing you could try, in addition to or instead of the network timeout, is checking for SQLNonTransientException. Typically the exception that occurs when there is some sort of catastrophic DB failure is SQLNonTransientException, which is a subclass of SQLException.
Essentially this exception means "something went wrong that is outside the control of your code or the JDBC driver". So generally when you see it, it's because of a DB failure or a network failure.
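For illustration, you could single out that subclass in your catch blocks; the SQL and the reactions shown here are placeholders:

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.SQLNonTransientException;
import java.sql.Statement;

static void runStatement(Connection conn, String sql) {
    try (Statement stmt = conn.createStatement()) {
        stmt.execute(sql);
    } catch (SQLNonTransientException fatal) {
        // retrying on the same connection is pointless: likely a DB crash or network failure
        throw new IllegalStateException("Database appears to be down", fatal);
    } catch (SQLException maybeTransient) {
        // other SQLExceptions may succeed on a retry (deadlock, lock timeout, ...)
        System.err.println("Statement failed, possibly transient: " + maybeTransient.getMessage());
    }
}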
How do I handle a DB failure once I know something went wrong?
This part is a bit tricky, especially since so many things can go wrong with a DB (it could have crashed, the network could be down/slow, etc).
Option A: Show that something went wrong
In most cases I've seen, the way to "handle" this sort of failure is to acknowledge it and just present it to the user.
Option B: Retry and/or failover
Another option is doing a fixed number of retries and/or failing over to a backup database. Failing over to a backup db can be tricky, and typically this is only done if you have some sort of middleware do it for you.
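A minimal sketch of the fixed-retry idea (the attempt count, the delay, and the DataSource are all illustrative, and it assumes maxAttempts >= 1; failover is deliberately left out):

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;
import javax.sql.DataSource;

static void executeWithRetry(DataSource ds, String sql, int maxAttempts) throws SQLException {
    SQLException last = null;
    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
        try (Connection conn = ds.getConnection();
             Statement stmt = conn.createStatement()) {
            stmt.execute(sql);
            return; // success
        } catch (SQLException e) {
            last = e; // remember the failure and try again
            try {
                Thread.sleep(1000L * attempt); // crude linear backoff between attempts
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
                break;
            }
        }
    }
    throw last; // all attempts failed: surface the last error to the caller
}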

Related

Java SQLite: Is it necessary to call rollback when auto commit is false and a transaction fails?

When working with a SQLite database in Java, suppose I set auto-commit to false. When an SQLException occurs, is it necessary to call the rollback() method? Or can I simply skip calling it and the transaction will be rolled back automatically (all the changes I made during the transaction will be undone automatically)?
Quick answer: The fact that you're asking means you're doing it wrong, probably. However, if you must know: Yes, you need to explicitly rollback.
What is happening under the hood
At the JDBC level (and if you're using JOOQ, JDBI, Hibernate, or something similar, that's a library built on top of JDBC usually), you have a Connection instance. You'd have gotten this via DriverManager.getConnection(...) - or a connection pooler got it for you, but something did.
That connection can be in the middle of a transaction. Auto-commit mode merely means that the connection assumes you meant to write an additional commit() after every SQL statement you run in that connection's context; that's all auto-commit does. But, obviously, if it is on, you are probably in a 'clean' state, that is, the last command processed by that connection was either COMMIT or ROLLBACK.
If it is in the middle of a transaction and you close the connection, the ROLLBACK is implicit.
The connection has to make a choice, it can't keep existing, so, it commits or rolls back. The spec guarantees it doesn't just commit for funsies on you, so, therefore, it rolls back.
The question then boils down to your specific setup. This, specifically, is dangerous:
try (Connection con = ...) {
    con.setAutoCommit(false);
    try {
        try (var s = con.createStatement()) {
            s.execute("DROP TABLE foobar");
        }
    } catch (SQLException ignore) {
        // ignoring an exception usually bad idea. But for sake of example..
    }
    // A second statement on the same connection...
    try (var s = con.createStatement()) {
        s.execute("DROP TABLE quux");
    }
}
A JDBC driver is, as far as the spec is concerned, free to throw an SQLException along the lines of 'the connection is aborted; you must explicitly rollback first then you can use it again' on the second statement.
However, the above code is quite bad. You cannot use transaction isolation level SERIALIZABLE at all with this kind of code (once you get more than a handful of users, the app will crash and burn in a cavalcade of retry exceptions), and it is either doing something useless (re-using 1 connection for multiple transactions when you have a connection pooler in use), or is solving a problem badly (the problem of: Using a new connection for every transaction is pricey).
1 transaction, 1 connection
The only reason the above is dangerous is that we're doing two unrelated things (namely, 2 transactions) in a single try-block associated with one connection object. We're re-using the connection. This is a bad idea: connections have baggage associated with them: properties that were set, and, yes, being in 'abort' state (where an explicit ROLLBACK is required before the connection is willing to execute any other SQL). By just closing the connection and getting a new one, you ditch all that baggage. This is the kind of baggage that results in bugs that unit tests will not easily catch, a.k.a. bugs that, if they ever trigger, cost a ton of money / eyeballs / goodwill / time to fix. Objectively you should prefer 99 easy-to-catch bugs if that avoids a single 100x-harder-to-catch bug, and this is one of the bugs in the latter category.
Connections are pricey? What?
There's one problem with the simple fix of using a connection for only a single transaction and then handing it back (which eliminates the need to roll back, since the connection does that automatically when you close() it): getting connections is quite resource-heavy.
So, folks tend to / should probably be using a connection pooler to avoid this cost. Don't write your own here either; use HikariCP or something like it. These tools pool connections for you: instead of invoking DriverManager.getConnection, you ask HikariCP for one, and you hand your connection back to HikariCP when you're done with it. Hikari will take care of resetting it for you, which includes rolling back if the connection is halfway inside a transaction and handling any other per-connection settings, getting it back to a known state.
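A minimal HikariCP sketch, purely for illustration (the JDBC URL and credentials are placeholders):

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import java.sql.Connection;
import java.sql.SQLException;

public class PooledExample {
    public static void main(String[] args) throws SQLException {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:mysql://localhost:3306/mydb"); // placeholder URL
        config.setUsername("appuser");                         // placeholder credentials
        config.setPassword("secret");

        try (HikariDataSource pool = new HikariDataSource(config)) {
            // One transaction per borrowed connection; close() hands it back to the pool,
            // and Hikari resets it (rolling back if necessary) before reusing it.
            try (Connection conn = pool.getConnection()) {
                conn.setAutoCommit(false);
                // ... the statements of a single transaction go here ...
                conn.commit();
            }
        }
    }
}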
The common DB interaction model is essentially this 'flow':
someDbAccessorObject.act(db -> {
// do a single transaction here
});
and that's it. This code, under the hood, does all sorts of things:
Uses a connection pooler.
Sets up the connection in the right fashion, which primarily involves setting auto-commit to false, and setting the right transaction isolation level.
Will COMMIT at the end of the lambda block if no exceptions occurred, and hands the connection back to the pool in either case.
Will catch SQLExceptions and analyse whether they are retry exceptions. If so, it applies randomized exponential backoff and reruns the lambda block (that's what retry exceptions mean).
Takes care of having the code that 'gets' a connection (e.g. determines the right JDBC url to use) in a single place, so that a change in db config does not entail going on a global search/replace spree in your codebase.
In that model, it is somewhat rare that you run into your problem, because you end up in a '1 transaction? 1 connection!' model. Ordinarily that's pricey (creating connections is far more expensive than rolling back/committing as usual and then just continuing with a new transaction on the same connection object), but it boils down to the same thing once a pooler is being used.
In other words: Properly written DB code should not have your problem unless you're writing a connection pooler yourself, in which case the answer is definitely: roll back explicitly.
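If you do end up managing connections yourself, a sketch of that explicit-rollback discipline might look roughly like this (the DROP statements are just the example tables from above):

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

static void dropBothTables(Connection con) throws SQLException {
    con.setAutoCommit(false);
    try (Statement s = con.createStatement()) {
        s.execute("DROP TABLE foobar");
        s.execute("DROP TABLE quux");
        con.commit();
    } catch (SQLException e) {
        con.rollback(); // explicit rollback leaves the connection in a usable state
        throw e;
    }
}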

What happens to MySQL connection when internet connection is lost?

My application-server has a connection to a MySQL database. This connection is open 24/7.
Let's say my internet connection were to crash, or I were to block port 3306.
Will JDBC throw an error? And if not, how should I be handling this kind of problem?
The reason I'm asking this is because I've had cases before where the MySQL connection randomly stopped working, but wasn't closed, which caused clients to time out.
You will get a MySQLNonTransientConnectionException or CommunicationsException. Typically, for program safety you either want to:
Open/close connections as necessary
Re-open when connection is closed
I recommend the former personally, especially when the database is user-specified (some mysql setups have a connection timeout).
Edit:
I did forget to mention connection pools, per @ThorbjørnRavnAndersen; that is also a viable solution. I personally don't do that myself, using a separate SQL connection per threaded operation.
Personally, I wrap any database calls in try/catch blocks, because there's always the potential for issues to arise. You'll want to set some default values in the catch block, and if the code happens to land on these defaults for whatever reason, display some sort of (ideally pretty) error message to the end user.
Running a SQL query isn't always guaranteed to pull back any results either; any number of things could go wrong: the server could be down, the connection could be so saturated that it's not able to pull the results in a timely manner, or it could be sitting in the middle of a backup. These things always have to be accounted for; that's why I recommend doing it the way described above.
Lastly, to answer your question, yes, JDBC will in fact throw an error - under any of these circumstances.
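A rough sketch of that catch-and-reopen approach; openConnection(), the JDBC URL, and the single retry are all assumptions for illustration:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

// Hypothetical helper: re-reads whatever URL/credentials your app actually uses
static Connection openConnection() throws SQLException {
    return DriverManager.getConnection("jdbc:mysql://localhost:3306/mydb", "appuser", "secret");
}

static Connection runWithReconnect(Connection conn, String sql) throws SQLException {
    try (Statement stmt = conn.createStatement()) {
        stmt.execute(sql);
        return conn;
    } catch (SQLException e) {
        // isValid() pings the server (JDBC 4+); if the link is dead, rebuild it
        if (!conn.isValid(5)) {
            conn.close();
            Connection fresh = openConnection();
            try (Statement stmt = fresh.createStatement()) {
                stmt.execute(sql); // one retry on the new connection
            }
            return fresh;
        }
        throw e; // the connection is fine; the statement itself failed
    }
}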

How to know why a jdbc connection became invalid

I would like to know, friends: when a JDBC connection becomes invalid, it could be that it was closed intentionally, or that some transaction made it invalid and closed it. Is there a way to know what exactly made the connection invalid? Is there any trace left on the connection that I can get and check?
I am faced with a situation where I have to detect a server that has gone offline. How I do this is: any operation that tries to borrow a connection from the connection pool checks whether that connection is valid. If it is not, and the reason it is not valid is that it failed to connect to the database, I fire a property change and notify any subscriber to the change; the subscriber then pops up a dialog to block all operations and starts querying the database every 5 seconds to check whether it is back. I hope I have made my situation clear.
Better, you could use a connection pooling library that will validate connections on your behalf. During database operations it will automatically test the connection's health and create a new connection if the existing one is invalid.
For C3P0 connection pooling library, please check the following document:
http://www.mchange.com/projects/c3p0/#configuring_connection_testing
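For illustration, the relevant c3p0 settings can also be applied programmatically; the values below are examples, not recommendations:

import com.mchange.v2.c3p0.ComboPooledDataSource;

ComboPooledDataSource cpds = new ComboPooledDataSource();
cpds.setJdbcUrl("jdbc:mysql://localhost:3306/mydb"); // placeholder URL
cpds.setUser("appuser");                             // placeholder credentials
cpds.setPassword("secret");
cpds.setTestConnectionOnCheckout(true);   // validate each connection as it is borrowed
cpds.setIdleConnectionTestPeriod(300);    // also test idle connections every 5 minutes
cpds.setPreferredTestQuery("SELECT 1");   // cheap query used for the test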
Typically when you attempt to use a stale connection, for example by issuing a simple SQL statement as suggested by @PeterLawrey, an SQLException will be thrown containing the details of what is wrong, if they are at all available to the driver. Catch the exception and analyse what is returned by its getErrorCode() and getSQLState() methods. Keep in mind that, while SQLSTATE values are standardized, they are not very granular, whereas vendor error codes may provide more information but will differ between database platforms.
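A small sketch of that analysis; in the SQL standard, SQLSTATE values starting with "08" denote connection exceptions, while vendor error codes vary per database:

import java.sql.SQLException;

static boolean looksLikeConnectionFailure(SQLException e) {
    String sqlState = e.getSQLState();
    // SQLSTATE class "08" = connection exception (coarse-grained but portable)
    boolean connectionProblem = sqlState != null && sqlState.startsWith("08");
    // Vendor codes are more specific but differ per database; log them for diagnosis
    System.err.println("SQLState=" + sqlState + ", vendor code=" + e.getErrorCode());
    return connectionProblem;
}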

postgresql error: canceling statement due to user request

What causes this error in postgresql?
org.postgresql.util.PSQLException: ERROR: canceling statement due to user request
My Software Versions:
PostgreSQL 9.1.6 on x86_64-redhat-linux-gnu, compiled by gcc (GCC) 4.7.2 20120921 (Red Hat 4.7.2-2), 64-bit.
My postgresql driver is: postgresql-9.2-1000.jdbc4.jar
Using java version: Java 1.7
Clue: My postgresql database is on a solid state hard drive and this error happens randomly and sometimes not at all.
We have figured out the cause of this issue. It's explained by a buggy implementation of setQueryTimeout() in the recent JDBC drivers 9.2-100x. It might not happen if you open and close connections manually, but it very often happens with connection pooling in place and autocommit set to false. In this case, setQueryTimeout() should be called with a non-zero value (for example, via Spring framework's @Transactional(timeout = xxx) annotation).
It turns out that whenever an SQL exception is raised during statement execution, the cancellation timer is not cancelled and stays alive (that's how it is implemented). Because of pooling, the connection behind it is not closed but is returned to the pool.
Later on, when the cancellation timer fires, it cancels whatever query is currently associated with the connection the timer was created with. At that moment it is a totally different query, which explains the apparent randomness.
The suggested workaround is to give up on setQueryTimeout() and use PostgreSQL configuration instead (statement_timeout). It doesn't provide the same level of flexibility, but at least it always works.
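As an illustration of that workaround, statement_timeout can be set per session from JDBC (or globally in postgresql.conf, or per role via ALTER ROLE); the 30-second value here is only an example:

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

static void applyStatementTimeout(Connection conn) throws SQLException {
    try (Statement stmt = conn.createStatement()) {
        // Applies only to this connection/session; queries exceeding it are cancelled by the server
        stmt.execute("SET statement_timeout = '30s'");
    }
}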
This assumes that the race condition bug in the jdbc jar file for postgresql is responsible for the above error.
Workaround 1, refresh connection to database periodically
One workaround is to close the connection to the database and create a new connection to the database periodically. After every few thousand sql statements just close the connection and re-create it. Then for some reason this error is no longer thrown.
Workaround 2, turn on logging
If you turn on logging at the JDBC driver level when you are setting up the driver, then in some situations the race condition problem is neutralized:
Class.forName("org.postgresql.Driver");
org.postgresql.Driver.setLogLevel(org.postgresql.Driver.DEBUG);
Workaround 3, catch the exception and re-initialize connection
You could also try catching the specific exception, re-initializing the connection and trying the query again.
Workaround 4, wait until postgresql jdbc jar comes out with a bug fix
I think the problem may be associated with the speed of my SSD hard drive. If you get this error, please post how to reproduce it consistently here; there are devs very interested in squashing this bug.
If you are getting this error without using transactions
The user has requested the statement be cancelled. The statement is doing exactly what it is told to do. The question is, who requested this statement be cancelled?
Look at every line in your code that prepares SQL for execution. You could have some call on the statement that cancels it under certain circumstances, like this (the INSERT is just a placeholder):
conn.setAutoCommit(true);
PreparedStatement my_insert_statement =
        conn.prepareStatement("INSERT INTO foo (bar) VALUES (?)"); // table/column are placeholders
my_insert_statement.setQueryTimeout(25); // cancels the statement if it runs longer than 25 seconds
my_insert_statement.setString(1, "moobars");
my_insert_statement.executeUpdate();
my_insert_statement.close();
In my case, I had set the query timeout to 25 seconds, and when the insert took longer than that, it threw the 'canceling statement due to user request' exception.
If you are getting this error while using transactions:
If you receive this Exception, double check all your code that does SQL transactions.
If you have a query that is in a transaction and you forget to commit, and then you use that connection to do something else where you operate as if you are not in a transaction, there could be undefined behavior which produces this Exception.
Make sure all code that does a transaction is cleaning up after itself. Make sure the transaction begins, work is done, more work is done, and the transaction is rolled back or committed, then make sure the connection is left in the autocommit=true state.
If this is your problem, then the Exception is not thrown where you have forgotten to clean up after yourself, it happens somewhere long after you have failed to clean up after a transaction, making this an elusive exception to track down. Refreshing the connection (closing it and getting a new one) will clear it up.
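A sketch of that cleanup discipline (the SQL being run is a placeholder):

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

static void runInTransaction(Connection conn, String sql) throws SQLException {
    conn.setAutoCommit(false);
    try (Statement stmt = conn.createStatement()) {
        stmt.execute(sql);
        conn.commit();
    } catch (SQLException e) {
        conn.rollback();
        throw e;
    } finally {
        conn.setAutoCommit(true); // leave the connection in a clean, predictable state
    }
}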
In addition to Eric's suggestions, you can see statement cancels when:
An administrator, or another connection logged in as the same user, uses pg_cancel_backend to ask your session to cancel its current statement
The administrator sends a signal to the PostgreSQL backend that's running your statement
The administrator requests a fast shutdown or restart of the PostgreSQL server
Check for cron jobs or load management tools that might be cancelling long-running queries.

JBoss AS 5 database connection pool re-connect routine for MS SQL Server

I'd like to come up with the best approach for re-connecting to MS SQL Server when connection from JBoss AS 5 to DB is lost temporarily.
For Oracle, I found this SO question: "Is there any way to have the JBoss connection pool reconnect to Oracle when connections go bad?" which says it uses an Oracle specific ping routine and utilizes the valid-connection-checker-class-name property described in JBoss' Configuring Datasources Wiki.
What I'd like to avoid is to have another SQL run every time a connection is pulled from the pool which is what the other property check-valid-connection-sql basically does.
So for now, I'm leaning towards an approach which uses exception-sorter-class-name but I'm not sure whether this is the best approach in the case of MS SQL Server.
Hoping to hear your suggestions on the topic. Thanks!
I am not sure it will work the way you describe it (transparently).
The valid connection checker (this can be either a SQL statement in the *-ds.xml file or a class that does the lifting) is meant to be called when a connection is taken from the pool, as the DB could have closed it while it sat in the pool. If the connection is no longer valid, it is closed and a new one is requested from the DB; this is completely transparent to the application and only happens (as you say) when the connection is taken out of the pool. You can then use that connection in your application for a long time.
The exception sorter is meant to report to the application if e.g. ORA-0815 is a harmless or bad return code for a SQL statement. If it is a harmless one it is basically swallowed, while for a bad one it is reported to the application as an Exception.
So if you want to use the exception sorter to find bad connections in the pool, you need to be prepared that basically every statement that you fire could throw a stale-connection Exception and you would need to close the connection and try to obtain a new one. This means appropriate changes in your code, which you can of course do.
I think firing a cheap sql statement at the DB every now and then to check if a connection from the pool is still valid is a lot less expensive than doing all this checking 'by hand'.
Btw: while there is the generic connection checker sql that works with all databases, some databases provide another way of testing if the connection is good; Oracle has a special ping command for this, which is used in the special OracleConnectionChecker class you refer to. So it may be that there is something similar for MS-SQL, which is less expensive than a simple SQL statement.
I successfully used the background validation property background-validation-millis from https://community.jboss.org/wiki/ConfigDataSources
With JBoss 5.1 (I don't know with other versions), you can use
<valid-connection-checker-class-name>org.jboss.resource.adapter.jdbc.vendor.MSSQLValidConnectionChecker</valid-connection-checker-class-name>
