How to resurrect a JDBC connection after failure?

I am developing a program that uses transactions for data integrity.
I have read about JDBC savepoints, where one can roll back to a point in the transaction where there was no error.
But there is one situation, say a network failure, that invalidates the connection handling the transaction.
I have managed to detect in my program whenever a network problem occurs; a dialog then appears that blocks the UI for 2 minutes while trying to re-establish a connection to the db.
My question is: is there a way to save a transaction (not a savepoint), or to re-establish the connection where a network failure occurred during the transaction, so that once the connection is re-established we can continue with the previous queries?
The reason I want this is that my program performs transactions over multiple modal dialogs before committing. Say dialog A's queries were successful; when we go to dialog B before committing, if a network error occurs there, we should be able to continue where we left off in dialog A.
Thanks in advance.

Suspending a database transaction during "user think-time" is bad news. A more appropriate strategy is to use "versioning".
This is done by adding a version column to the record that increments with every update. When a user loads the record to think about changes, the current version is loaded along with it. When the changes are applied, the version is compared to ensure it has not changed in the meantime. If it has changed, you have a "soft deadlock", which can be addressed in a few ways (see the sketch after this list):
Warn the user that another change has been made, and ask to confirm an overwrite
Present the user with a list of the updates that occurred during their think-time, and ask if they want to overwrite them. This can be tricky, because you should refresh the version number for the confirmation, recheck it if they choose to overwrite, and give them yet another warning/confirmation if something changed while they were deciding to overwrite. Your process should be designed to do this forever if necessary.
Prevent the update outright
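A minimal JDBC sketch of the version check (a sketch only; the accounts table, column names and variables are hypothetical, and conn is an open java.sql.Connection):

// Optimistic locking: bump the version and update only if nobody else did.
String sql = "UPDATE accounts SET balance = ?, version = version + 1"
           + " WHERE id = ? AND version = ?";
try (PreparedStatement ps = conn.prepareStatement(sql)) {
    ps.setBigDecimal(1, newBalance);
    ps.setLong(2, accountId);
    ps.setLong(3, versionLoadedAtReadTime);
    if (ps.executeUpdate() == 0) {
        // Soft deadlock: the row changed during user think-time.
        // Reload the record and take one of the three actions above.
    }
}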

If your main concern is data integrity and your backend database is ACID-compliant, then you do not need to worry. Simply re-do the transaction after re-establishing the connection, and that will work just fine.

Related

How to SET LOCK MODE in java application

I am working on a Java web application that uses Weblogic to connect to an Informix database. In the application we have multiple threads creating records in a table.
It happens pretty often that it fails and the following error is thrown:
java.sql.SQLException: Could not do a physical-order read to fetch next row....
Caused by: java.sql.SQLException: ISAM error: record is locked.
I am assuming that both threads are trying to insert or update when the record is locked.
I did some research and found that there is an option to tell the database that, instead of throwing an error, it should wait for the lock to be released:
SET LOCK MODE TO WAIT;
SET LOCK MODE TO WAIT 17;
I don't think that there is an option in JDBC to use this setting. How do I go about using this setting in my Java web app?
You can always just send that SQL straight up: create a Statement with createStatement() and execute that exact SQL.
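For example (a minimal sketch, assuming conn is an open java.sql.Connection to Informix):

try (Statement st = conn.createStatement()) {
    st.execute("SET LOCK MODE TO WAIT 17"); // wait up to 17 seconds for a lock
}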
The more 'normal' / modern approach to this problem is a combination of MVCC, the transaction level 'SERIALIZABLE', retry, and random backoff.
I have no idea if Informix is anywhere near that advanced, though. Modern DBs such as Postgres are (mysql does not count as modern for the purposes of MVCC/serializable/retry/backoff, and transactional safety).
Doing MVCC/Serializable/Retry/Backoff in raw JDBC is very complicated; use a library such as JDBI or JOOQ.
MVCC: A mechanism whereby transactions are shallow clones of the underlying data. 2 separate transactions can both read and write to the same records in the same table without getting in each other's way. Things aren't 'saved' until you commit the transaction.
SERIALIZABLE: A transaction level (also called isolation level), settable with jdbcDbObj.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE); - the safest level. If you know how version control systems work: you're asking the database to aggressively rebase everything so that the entire chain of commits is ordered into a single long line of events: each transaction acts as if it was done after the previous transaction was completed. The simplest way to implement this level is to globally lock all the things. This is, of course, very detrimental to multithreaded performance. In practice, good DB engines (such as Postgres) are smarter than that: multiple threads can simultaneously run transactions without just being frozen and waiting for locks; the DB engine instead checks whether the things the transaction did (not just writing, also reading) are conflict-free with simultaneous transactions. If yes, it's all allowed. If not, all but one of the simultaneous transactions throw a retry exception. This is the only level that lets you do this sequence of events safely:
Fetch the balance of isaace's bank account.
Fetch the balance of rzwitserloot's bank account.
subtract €10,- from isaace's number, failing if the balance is insufficient.
add €10,- to rzwitserloot's number.
Write isaace's new balance to the db.
Write rzwitserloot's new balance to the db.
commit the transaction.
Any level less than SERIALIZABLE will silently fail the job; if multiple threads do the above simultaneously, no SQLExceptions occur but the sum of the balance of isaace and rzwitserloot will change over time (money is lost or created – in between steps 1 & 2 vs. step 5/6/7, another thread sets new balances, but these new balances are lost due to the update in 5/6/7). With serializable, that cannot happen.
RETRY: The way smart DBs solve the problem is by failing (with a 'retry' error) all but one of the conflicting transactions: the engine checks whether any SELECT done by the entire transaction would have been affected by transactions committed to the db after this transaction was opened. If the answer is yes (some selects would have gone differently), the transaction fails. The point of this error is to tell the code that ran the transaction to just.. start from the top and do it again. Most likely this time there won't be a conflict and it will work. The assumption is that conflicts CAN occur but usually do not, so it is better to assume 'fair weather' (no locks, just do your stuff), check afterwards, and try again in the exotic scenario where it conflicted, vs. trying to lock rows and tables. Note that, for example, Ethernet works the same way (assume fair weather, recover from errors afterwards).
BACKOFF: One problem with retry is that computers are too consistent: If 2 threads get in the way of each other, they can both fail, both try again, just to fail again, forever. The solution is that the threads twiddle their thumbs for a random amount of time, to guarantee that at some point, one of the two conflicting retriers 'wins'.
In other words, if you want to do it 'right' (see the bank account example) but also relatively 'fast' (not globally locking), get a DB that can do this and use JDBI or JOOQ. Otherwise, you'd have to write code that runs all DB work in a lambda block, catches the SQLException, checks the SqlState to see if it indicates that you should retry (sqlstate codes are DB-engine specific), and if so reruns that lambda after waiting an exponentially increasing amount of time that also includes a random factor. That's fairly complicated, which is why I strongly advise you to rely on JOOQ or JDBI to take care of this for you.
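For illustration, a minimal hand-rolled sketch of serializable + retry + backoff (imports from java.sql, javax.sql and java.util.concurrent omitted; "40001" is PostgreSQL's serialization_failure SQLState, and other engines use different codes):

// Runs a unit of work at SERIALIZABLE, retrying on serialization
// failures with exponential backoff plus random jitter.
static <T> T inSerializableTx(DataSource ds, SQLFunction<Connection, T> work)
        throws SQLException, InterruptedException {
    long backoff = 50; // milliseconds
    for (int attempt = 0; ; attempt++) {
        try (Connection conn = ds.getConnection()) {
            conn.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
            conn.setAutoCommit(false);
            try {
                T result = work.apply(conn);
                conn.commit();
                return result;
            } catch (SQLException e) {
                conn.rollback();
                // Retry only on serialization failures, and only a few times.
                if (!"40001".equals(e.getSQLState()) || attempt >= 5) throw e;
            }
        }
        // Random jitter guarantees two conflicting retriers eventually desynchronize.
        Thread.sleep(backoff + ThreadLocalRandom.current().nextLong(backoff));
        backoff *= 2;
    }
}

interface SQLFunction<A, R> { R apply(A a) throws SQLException; }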
If you aren't ready for that level of DB usage, just create a statement and send "SET LOCK MODE TO WAIT 17;" as an SQL statement straight up, at the start of opening any connection. If you're using a connection pool, there is usually a place where you can configure SQL statements to be run on connection start.
The Informix JDBC driver does allow you to automatically set the lock wait mode when you connect to the server.
Simply pass the following parameter via the DataSource or connection URL:
IFX_LOCK_MODE_WAIT=17
The values for JDBC are
(-1) Wait forever
(0) not wait (default)
(> 0) wait this many seconds
See https://www.ibm.com/support/knowledgecenter/SSGU8G_14.1.0/com.ibm.jdbc.doc/ids_jdbc_040.htm
Connection conn = DriverManager.getConnection(
    "jdbc:Informix-sqli://cleo:1550:IFXHOST=cleo;PORTNO=1550;"
    + "user=rdtest;password=my_passwd;IFX_LOCK_MODE_WAIT=17");

OrientDB - ensure consistent state

I am not quite sure how to ask this question but I hope you get my drift...
I am using OrientDB as an embedded database used by a single application. I would like to ensure that, should this application crash, the database is always in a consistent state, so that my application can be started again without having to perform any maintenance on the database or losing any data.
I.e. when I change the database and get a success message, I know that the changes have been written.
Is this supported by OrientDB, and if so, what is the option to enable it?
(P.S. if I knew the generally accepted term for this kind of setup, I could search for it myself...)
OrientDB uses a kind of rollback journal, which means that by default it logs all operations performed on the data stored on disk into an append-only log: the WAL (write-ahead log). Records of this log are cached and flushed every second. If the application crashes, the WAL/operation log is read and all operations are applied once again. The WAL also has a notion of transactions, which means that if a transaction was not finished at the time of the crash, all of its applied changes will be rolled back. So you can be sure of the following in OrientDB:
All data written up to one second before the crash will be restored.
All data written inside a transaction will be in a consistent state.
You can lose part of the data from the last one-second interval.
The interval of WAL cache flushes can be changed, but that may lead to a performance slowdown.

What happens to MySQL connection when internet connection is lost?

My application-server has a connection to a MySQL database. This connection is open 24/7.
Let's say my internet connection were to crash, or I were to block port 3306.
Will JDBC throw an error? And if not, how should I be handling this kind of problem?
The reason I'm asking this is because I've had cases before where the MySQL connection randomly stopped working, but wasn't closed, which caused clients to time out.
You will get a MySQLNonTransientConnectionException or CommunicationsException. Typically, for program safety you either want to:
Open/close connections as necessary
Re-open when connection is closed
I recommend the former personally, especially when the database is user-specified (some mysql setups have a connection timeout).
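A minimal sketch of the second option (openConnection() is a hypothetical factory around DriverManager.getConnection or your pool):

// Re-open the connection when it has died; isValid() pings the server.
private Connection conn;

Connection getConnection() throws SQLException {
    if (conn == null || !conn.isValid(2 /* seconds */)) {
        if (conn != null) {
            try { conn.close(); } catch (SQLException ignored) { }
        }
        conn = openConnection(); // hypothetical: however you build connections
    }
    return conn;
}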
Edit:
I did forget to mention connection pools, per @ThorbjørnRavnAndersen; those are also a viable solution. I personally don't do that myself, instead using a separate SQL connection per threaded operation.
Personally, I put any database calls into try/catch blocks, because there's always the potential for issues to arise. You'll want to set some default values in the catch block, and if execution lands on those defaults for whatever reason, display some sort of (ideally pretty) error message to the end user.
Running a SQL query isn't guaranteed to pull back any results either; any number of things could go wrong: the server could be down, the connection could be so saturated that it can't return the results in a timely manner, or it could be sitting in the dead center of a backup. These things always have to be accounted for, which is why I recommend doing it the way described above.
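A minimal sketch of that pattern (the users table, dataSource and showErrorDialog are hypothetical):

int userCount = 0; // safe default used if the query fails
try (Connection conn = dataSource.getConnection();
     Statement st = conn.createStatement();
     ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM users")) {
    if (rs.next()) {
        userCount = rs.getInt(1);
    }
} catch (SQLException e) {
    // Fall back to the default and tell the end user something went wrong.
    showErrorDialog("The database is currently unreachable. Please try again.");
}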
Lastly, to answer your question, yes, JDBC will in fact throw an error - under any of these circumstances.

postgresql error: canceling statement due to user request

What causes this error in postgresql?
org.postgresql.util.PSQLException: ERROR: canceling statement due to user request
My Software Versions:
PostgreSQL 9.1.6 on x86_64-redhat-linux-gnu, compiled by gcc (GCC) 4.7.2 20120921 (Red Hat 4.7.2-2), 64-bit".
My postgresql driver is: postgresql-9.2-1000.jdbc4.jar
Using java version: Java 1.7
Clue: My postgresql database is on a solid state hard drive and this error happens randomly and sometimes not at all.
We have figured out the cause of this issue. It is explained by a buggy implementation of setQueryTimeout() in the recent JDBC drivers 9.2-100x. It might not happen if you open and close connections manually, but it happens very often with connection pooling in place and autocommit set to false. In this case, setQueryTimeout() should be called with a non-zero value (for example, via the Spring framework's @Transactional(timeout = xxx) annotation).
It turns out that whenever an SQL exception is raised during statement execution, the cancellation timer is not cancelled and stays alive (that's how it is implemented). Because of pooling, the connection behind it is not closed but is returned to the pool.
Later on, when the cancellation timer fires, it cancels the query currently associated with the connection the timer was created with. At that moment it is a totally different query, which explains the randomness.
The suggested workaround is to give up on setQueryTimeout() and use the PostgreSQL configuration instead (statement_timeout). It doesn't provide the same level of flexibility, but at least it always works.
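For example, it can be set per session with plain SQL (statement_timeout takes milliseconds, or a value with units such as '30s'):

try (Statement st = conn.createStatement()) {
    // Cancel any statement on this session that runs longer than 30 seconds.
    st.execute("SET statement_timeout = 30000");
}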
This assumes that the race condition bug in the jdbc jar file for postgresql is responsible for the above error.
Workaround 1, refresh connection to database periodically
One workaround is to close the connection to the database and create a new one periodically. After every few thousand SQL statements, just close the connection and re-create it; then, for some reason, this error is no longer thrown.
Workaround 2, turn on logging
If you turn on logging at the JDBC driver level when setting up the driver, then in some situations the race condition is neutralized:
Class.forName("org.postgresql.Driver");
org.postgresql.Driver.setLogLevel(org.postgresql.Driver.DEBUG);
Workaround 3, catch the exception and re-initialize connection
You could also try catching the specific exception, re-initializing the connection and trying the query again.
Workaround 4, wait until postgresql jdbc jar comes out with a bug fix
I think the problem may be associated with the speed of my SSD hard drive. If you get this error, please post how to reproduce it consistently; there are devs very interested in squashing this bug.
If you are getting this error without using transactions
The user has requested the statement be cancelled. The statement is doing exactly what it is told to do. The question is, who requested this statement be cancelled?
Look at every line in your code which prepares the SQL for execution. You could have some call applied to the statement that cancels it under certain circumstances, like this:
conn.setAutoCommit(true);
PreparedStatement my_insert_statement = conn.prepareStatement(
    "INSERT INTO my_table (name) VALUES (?)"); // hypothetical SQL; the original snippet omitted it
my_insert_statement.setQueryTimeout(25);       // cancels the statement after 25 seconds
my_insert_statement.setString(1, "moobars");
my_insert_statement.executeUpdate();
my_insert_statement.close();
In my case, what happened was that I had set the query timeout to 25 seconds, and when the insert took longer than that, it raised the 'canceling statement due to user request' exception.
If you are getting this error while using transactions:
If you receive this Exception, double check all your code that does SQL transactions.
If you have a query that is in a transaction and you forget to commit, and then you use that connection to do something else where you operate as if you are not in a transaction, there could be undefined behavior which produces this Exception.
Make sure all code that does a transaction is cleaning up after itself. Make sure the transaction begins, work is done, more work is done, and the transaction is rolled back or committed, then make sure the connection is left in the autocommit=true state.
If this is your problem, then the Exception is not thrown where you have forgotten to clean up after yourself, it happens somewhere long after you have failed to clean up after a transaction, making this an elusive exception to track down. Refreshing the connection (closing it and getting a new one) will clear it up.
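The shape of that cleanup, as a minimal sketch (framework code such as Spring's @Transactional would normally do this for you):

conn.setAutoCommit(false);
try {
    // ... do the transactional work here ...
    conn.commit();
} catch (SQLException e) {
    conn.rollback(); // never leave a half-finished transaction on the connection
    throw e;
} finally {
    conn.setAutoCommit(true); // leave the connection in the autocommit=true state
}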
In addition to Eric's suggestions, you can see statement cancels when:
An administrator or another connection logged in as the same user uses pg_cancel_backend to ask your session to cancel its current statement
The administrator sends a signal to the PostgreSQL backend that's running your statement
The administrator requests a fast shutdown or restart of the PostgreSQL server
Check for cron jobs or load management tools that might be cancelling long-running queries.

Java Hibernate session and its scope?

I have just started using Hibernate with HSQLDB. In the tutorial they tell me not to use the anti-pattern "session-per-operation". However, each time I commit a transaction the session is closed as well. How am I supposed to avoid using getCurrentSession() if commit() closes the session?
I'm a bit curious about how people usually handle the scope of the session. I have seen several samples on building web applications where you have one session per request. In my case I'm building a service, and cannot apply the same idea. The service runs 24/7, and sporadically it does some database operations. Should I keep the database session alive all the time and just use transactions as boundaries between operations (considering a case where my transaction commits do not close the session), or should I just create a new one for each operation (which is the anti-pattern, but how else)?
Thanks in advance!
That behaviour is determined by the implementation of CurrentSessionContext in use. The default happens to be ThreadLocalSessionContext which does close-on-commit, but you're by no means constrained to that.
You can configure/build any type of session scope you like by using ManagedSessionContext and binding/unbinding sessions at the appropriate beginning and end of the life cycle. It seems to make sense for you to bind a Session at the entry to your service's unit of work and unbind it at the exit. (It is of course no trivial task to build robust code for doing this. In particular, remember that you're expected to make a new Session if an exception comes out of one of its methods.)
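A bare-bones sketch of that bind/unbind cycle (assuming hibernate.current_session_context_class is set to "managed"; sessionFactory and doWork() are placeholders):

Session session = sessionFactory.openSession();
try {
    ManagedSessionContext.bind(session);   // getCurrentSession() now returns this session
    session.beginTransaction();
    doWork();                              // the unit of work, using getCurrentSession()
    session.getTransaction().commit();
} catch (RuntimeException e) {
    session.getTransaction().rollback();
    throw e;                               // per the note above, don't reuse this session
} finally {
    ManagedSessionContext.unbind(sessionFactory);
    session.close();
}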
Responding to comment was getting too large for a comment.
That is the default behaviour because it's the only thing that's "safe" without additional work or configuration provided by the user. The "commit" is the only lifecycle point that Hibernate is able to "see" if you don't help it out, so it has to close there or risk the session being left dangling forever.
Determining potential session life cycle boundaries requires a fair bit of knowledge about what you're actually doing. "It's a background service" isn't much to go on. Assuming it does something like sit idle, wake up every X minutes, do some work, then go back to sleep for another X minutes, that would be a good boundary at which to open and then close a session.
You may be using an over-broad definition of 'operation' when talking about 'session per operation' being an anti-pattern.
They mean don't do something like this (fictitious requirements for your service):
Wake Up Service
Open Session
Read File location from database
Close Session
Open file
Open Session
Update Database tables from current file state
Close Session
Open Session
Write Activity Log to Database
Close Session
Sleep Service
It would be perfectly reasonable to do all of that in one session and then close it at the end. In a single-threaded environment where you're managing everything yourself within known boundaries, you can really just open and close the session yourself without using currentSession if you want. You just need to make sure it gets closed in the event of an exception. If you're listening for an operating system event, the event handling would be a perfectly fine scope for the session.
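For the fictitious service above, a single session spanning the whole wake-up cycle might look like this sketch (the step methods are illustrative, and each step would still run in its own transaction, omitted here for brevity):

Session session = sessionFactory.openSession();
try {
    String fileLocation = readFileLocation(session); // one session for all three steps
    FileState state = readCurrentFileState(fileLocation);
    updateTablesFromFileState(session, state);
    writeActivityLog(session);
} finally {
    session.close(); // always closed, even if an exception is thrown
}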
