Application continuity with Universal Connection Pool, Java JDBC, Oracle 12c

I am trying to achieve application continuity with an Oracle 12c database and Oracle UCP (Universal Connection Pool). As per the official documentation, I have implemented the following in my application. I am using ojdbc8.jar along with the matching ons.jar and ucp.jar in my application.
PoolDataSource pds = oracle.ucp.jdbc.PoolDataSourceFactory.getPoolDataSource();
Properties as per oracle documentation:
pds.setConnectionFactoryClassName("oracle.jdbc.replay.OracleDataSourceImpl");
pds.setUser("username");
pds.setPassword("password");
pds.setInitialPoolSize(10);
pds.setMinPoolSize(10);
pds.setMaxPoolSize(20);
pds.setFastConnectionFailoverEnabled(true);
pds.setONSConfiguration("nodes=IP_1:ONS_PORT_NUMBER,IP_2:ONS_PORT_NUMBER");
pds.setValidateConnectionOnBorrow(true);
pds.setURL("jdbc:oracle:thin:@my_scan_name.my_domain_name.com:PORT_NUMBER/my_service_name");
// I have also tried using a TNS-style URL. //
However, I am not able to achieve application continuity. I have in-flight transactions that I expect to be replayed when I bring down the RAC node on which my database service is running. What I observe is that the service migrates to the next available RAC node in the cluster, but my in-flight transactions fail. What I expect to happen is that the driver automatically restarts the failed in-flight transactions; however, I don't see this happening. Of the queries that I fire at the database, I sometimes see them being triggered again on the database side, but we see a Connection Closed exception on the client side.
According to some documentation, application continuity allows the application to mask outages from the user. My doubt is whether my understanding is correct that application continuity will replay the SQL statements that were in-flight when the outage occurred, or whether the true meaning of application continuity is something else.
I have referred to some blogs, such as this one:
https://martincarstenbach.wordpress.com/2013/12/13/playing-with-application-continuity-in-rac-12c/
The example mentioned there does not seem to be intended to demonstrate replay of in-flight SQL statements.
Is application continuity capable of replaying the in-flight SQL statements during an outage, or do FCF and application continuity only restore the state of the connection object and make it usable by the application after the outage has occurred? If the former is true, then please guide me on what I am missing in the application-level settings in my code that is keeping me from achieving replay.

Yes your understanding is correct. With the replay driver, Application Continuity can replay in-flight work so that an outage is invisible to the application and the application can continue, hence the name of the feature. The only thing that's visible from the application is a slight delay on the JDBC call that hit the outage. What's also visible is an increase in memory usage on the JDBC side because the driver maintains a queue of calls. What happens under the covers is that, upon outage, your physical JDBC connection will be replaced by a brand new one and the replay driver will replay its queue of calls.
Now there could be cases where replay fails. For example replay will fail if the data has changed. Replay will also be disabled if you have multiple transactions within a "request". A "request" starts when a connection is borrowed from the pool and ends when it's returned back to the pool. Typically a "request" matches a servlet execution. If within this request you have more than one "commit" then replay will be disabled at runtime and the replay driver will stop queuing. Also note that auto-commit must be disabled.
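To make the "one commit per request" rule concrete, here is a minimal plain-JDBC sketch of a replay-friendly request. The table name and SQL are made up for illustration, and `ds` stands in for the PoolDataSource from the question (which implements javax.sql.DataSource); this is not Oracle's reference code, just the shape of a request that keeps replay enabled.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.DataSource;

// Sketch of a replay-friendly "request": one pool borrow, auto-commit off,
// all work in a single transaction, exactly one commit, then return to pool.
class ReplayFriendlyRequest {
    static void runRequest(DataSource ds, String name) throws SQLException {
        try (Connection c = ds.getConnection()) {      // request begins at borrow
            c.setAutoCommit(false);                    // auto-commit must be off for replay
            try (PreparedStatement ps =
                     c.prepareStatement("INSERT INTO audit_log(name) VALUES (?)")) {
                ps.setString(1, name);
                ps.executeUpdate();
            }
            c.commit();                                // the single commit of the request
        }                                              // request ends when returned to pool
    }
}
```

A second commit inside the same borrow/return window would disable replay for that request at runtime, as described above.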
[I'm part of the Oracle team that designed and implemented this feature]

I think the jdbc connection string could be your problem:
pds.setURL("jdbc:oracle:thin:#my_scan_name.my_domain_name.com:PORT_NUMBER/my_service_name");
You are using a so-called EZConnect string, but this is not supported with AC. Use a full connect descriptor instead:
Alias (or URL) = (DESCRIPTION=
(CONNECT_TIMEOUT=120)(RETRY_COUNT=20)(RETRY_DELAY=3)(TRANSPORT_CONNECT_TIMEOUT=3)
(ADDRESS_LIST=(LOAD_BALANCE=on)
(ADDRESS=(PROTOCOL=TCP)(HOST=primary-scan)(PORT=1521)))
(ADDRESS_LIST=(LOAD_BALANCE=on)
(ADDRESS=(PROTOCOL=TCP)(HOST=secondary-scan)(PORT=1521)))
(CONNECT_DATA=(SERVICE_NAME=gold-cloud)))

Related

Does Oracle keeps query running in the background despite losing connection with the app?

Our app, written in Java, sends long-running queries to Oracle through the JDBC API. It's inevitable that sometimes the app could crash or get killed abruptly for a plethora of reasons, without having the chance to terminate the queries it has sent. When the app gets killed or stops, it also loses its connection to Oracle.
Does Oracle DB keep the query running in the background even if it already has lost connection with the app that had sent the query?
Please cite sources.
When a connection between the database and the app is lost, Oracle will stop the session's queries and kill the session. But there are two potential exceptions:
Rollback must finish. From the Database Concepts manual: "A transaction ends when ... A client process terminates abnormally, causing the transaction to be implicitly rolled back using metadata stored in the transaction table and the undo segment." That rollback process cannot be stopped regardless of what happens to the connection. Even if you kill the database instance, when the instance restarts it will resume the rollback. As a general rule of thumb, the time to rollback will be about the same as the time the database spent running the DML. You just have to wait while Oracle puts itself back into a consistent state.
Zombie sessions. Although I don't have a reproducible test case for this problem, I'm sure every DBA has a story about sessions running after the client process disappeared, or even after they killed the session. Before you dismiss this concern as an old myth, note that the SQLNET.EXPIRE_TIME parameter was created for this scenario. Setting the value greater than 0 will have Oracle periodically check and clear terminated sessions. But you don't need to set this parameter unless you're having specific problems.
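For reference, the parameter mentioned above lives in sqlnet.ora on the database server; the 10-minute interval below is just an example value, not a recommendation from the answer.

```
# sqlnet.ora on the database server:
# probe clients every 10 minutes and clean up terminated sessions
SQLNET.EXPIRE_TIME = 10
```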

JdbcSQLNonTransientConnectionException: Database may be already in use: "Waited for database closing longer than 1 minute"

We are using H2 started as a database server process, listening on the standard TCP/IP port 9092.
Our application is deployed in a Tomcat using connection pooling. We purge the pool during idle time, which ultimately closes all connections to H2. From time to time we observe errors when the application tries to open a connection to H2 again:
SCHEDULERSERVICE schedule: Exception: Database may be already in use: "Waited for database closing longer than 1 minute". Possible solutions: close all other connection(s); use the server mode [90020-199]
org.h2.jdbc.JdbcSQLNonTransientConnectionException: Database may be already in use: "Waited for database closing longer than 1 minute". Possible solutions: close all other connection(s); use the server mode [90020-199]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:617)
at org.h2.message.DbException.getJdbcSQLException(DbException.java:427)
at org.h2.message.DbException.get(DbException.java:205)
at org.h2.message.DbException.get(DbException.java:181)
at org.h2.engine.Engine.openSession(Engine.java:209)
at org.h2.engine.Engine.createSessionAndValidate(Engine.java:178)
at org.h2.engine.Engine.createSession(Engine.java:161)
at org.h2.server.TcpServerThread.run(TcpServerThread.java:160)
at java.lang.Thread.run(Thread.java:748)
at org.h2.message.DbException.getJdbcSQLException(DbException.java:617)
at org.h2.engine.SessionRemote.done(SessionRemote.java:607)
at org.h2.engine.SessionRemote.initTransfer(SessionRemote.java:143)
at org.h2.engine.SessionRemote.connectServer(SessionRemote.java:431)
at org.h2.engine.SessionRemote.connectEmbeddedOrServer(SessionRemote.java:317)
at org.h2.jdbc.JdbcConnection.<init>(JdbcConnection.java:169)
at org.h2.jdbc.JdbcConnection.<init>(JdbcConnection.java:148)
at org.h2.Driver.connect(Driver.java:69)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
The problem occurs when the Tomcat connection pool closes all idle (unused) connections and one connection still in use is closed afterwards.
The next attempt to open a new connection fails; a retry succeeds after some wait time.
Under which circumstances does this exception happen?
What does the exception mean?
Are there any recommendations to follow to avoid the problem?
It sounds to me like H2 closes the database after the last connection has been closed.
When does the database close occur?
How can database closures be controlled?
Thx in advance
Thorsten
Embedded database in web applications needs careful handling of its lifecycle.
You can add a javax.servlet.ServletContextListener implementation (marked with the @WebListener annotation or included in web.xml) and add an explicit database shutdown to its contextDestroyed() method.
You can force database shutdown here with connection.createStatement().execute("SHUTDOWN"). If your application needs to write something to database during unload, it should do it before that command.
Without the explicit shutdown, H2 closes the database when all connections are closed, unless some other behavior was configured explicitly (with parameters in the JDBC URL, for example). For example, DB_CLOSE_DELAY sets an additional delay; maybe your application uses that setting and therefore H2 doesn't close the database immediately, or the application doesn't close all connections immediately.
Anyway, when you're trying to update the web application on the fly, Tomcat tries to initialize the new version before the old version is unloaded. If H2 is in the classpath of the web application itself, the new version will be unable to connect to the database during the short period of time when the new version is already online but the old version isn't unloaded yet.
If you don't like it, you can run the standalone H2 Server process and use remote connections to it in your web applications.
Another option is to move H2 to the classpath of Tomcat itself and configure the connection pool as resource in the server.xml, in that case it shouldn't be affected by the lifecycle of your applications.
In both these cases you shouldn't use the SHUTDOWN command.
UPDATED
With client-server connections to a remote server, such an exception means that the server decided to close the database because there were no active connections. This operation can't be interrupted or reverted in the middle. An attempt to open a new connection to the same database during this process waits at most 1 minute for the close to complete before re-opening the database. This timeout is not configurable.
There are two possible solutions.
The DB_CLOSE_DELAY setting can be used with some large value in seconds. When all connections are closed, the database will stay online for the specified number of seconds. -1 can also be used to set an infinite timeout.
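As a sketch, DB_CLOSE_DELAY can be set directly in the JDBC URL; the host, port, and database path below are hypothetical, and -1 keeps the database open until an explicit SHUTDOWN:

```
jdbc:h2:tcp://localhost:9092/~/appdb;DB_CLOSE_DELAY=-1
```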
You can try to speed up the shutdown process, but you'll have to figure out what takes so much time yourself. The file compaction procedure is limited to 200 milliseconds by default; it may take a longer time, but I think it shouldn't be that long. Maybe you have a lot of temporary objects or uncommitted data, or a very high fragmentation of the database file. It's hard to say what's going wrong without further investigation.

Datasource Microsoft JDBC Driver for SQL Server (AlwaysOn Availability Groups)

I have a question related to the scenario when connecting from a Java application using the Microsoft JDBC Driver 4.0 to a SQL Server 2014 with AlwaysOn Availability Groups set up for high availability.
With this setup, we will be connecting to an availability group listener (specified in the DB connection string instead of any particular instance), so that DB fail-over etc. is handled gracefully by the listener, which tries to connect to the next available instance behind the scenes if the current primary goes down in the AG cluster.
Question(s) I have is,
In the data-source that is configured on the j2ee application server side (we use WebSphere), what happens to those connections already pooled by the data-source?
When a database goes down and the AG listener reconnects to the next available DB on the database side, will the AG listener (through the JDBC driver) also send an event to the data source on the app server, making sure the already-pooled connections are discarded and new ones created, so that transactions on the application side won't fail (though they might for a while until fail-over succeeds and new connections are created)? Or does the application only find out after requesting a connection from the datasource?
WebSphere Application Server is able to cope with bad connections and removes them from the pool. Exactly when this happens depends on some configurable options and on how fully the Microsoft JDBC driver takes advantage of the javax.sql.ConnectionEventListener API to send notifications to the application server. In the ideal case where a JDBC driver sends the connectionErrorOccurred event immediately for all connections, WebSphere Application Server responds by removing all of these connections from the pool and by marking any connection that is currently in use as bad so that it does not get returned to the pool once the application closes the handle.
Lacking this, WebSphere Application Server will discover the first bad connection upon next use by the application. It is discovered either by a connectionErrorOccurred event that is sent by the JDBC driver at that time, or, lacking that, by inspecting the SQLState/error code of an exception for known indicators of bad connections. WebSphere Application Server then goes about purging bad connections from the pool according to the configured Purge Policy. There are 3 options:
Purge Policy of Entire Pool - all connections are removed from the pool and in-use connections are marked as bad so that they are not pooled.
Purge Policy of Failing Connection Only - only the specific connection upon which the error actually occurred is removed from the pool or marked as bad and not returned to the pool.
Purge Policy of Validate All Connections - all connections are tested for validity (the Connection.isValid API) and connections found to be bad are removed from the pool or marked as bad and not returned to the pool. Connections found to be valid remain in the pool and continue to be used.
I'm not sure from your description if you are using WebSphere Application Server traditional or Liberty. If traditional, there is an additional option for pre-testing connections as they are handed out of the pool, but be aware that turning this on can have performance implications.
That said, the one thing to be aware of is that regardless of any of the above, your application will always need to be capable of handling the possibility of errors due to bad connections (even if the connection pool is cleared, connections can go bad while in use) and respond by requesting a new connection and retrying the operation in a new transaction.
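The pre-test-on-borrow idea mentioned above can be sketched generically with plain JDBC. This is not WebSphere's actual implementation; the helper name, retry count, and policy are invented for illustration.

```java
import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;

// Generic sketch: pre-test a pooled connection with Connection.isValid()
// before handing it to the caller, retrying a bounded number of times.
class ValidatingBorrow {
    static Connection borrowValid(DataSource ds, int attempts, int timeoutSeconds)
            throws SQLException {
        SQLException last = null;
        for (int i = 0; i < attempts; i++) {
            Connection c = ds.getConnection();
            try {
                if (c.isValid(timeoutSeconds)) {
                    return c;          // good connection: hand it out
                }
                c.close();             // stale: give it back so the pool can purge it
            } catch (SQLException e) {
                last = e;
                try { c.close(); } catch (SQLException ignore) { }
            }
        }
        throw last != null ? last
                : new SQLException("no valid connection after " + attempts + " attempts");
    }
}
```

As the answer notes, even with pre-testing the application must still be prepared for a connection to go bad while in use.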
Version 4 of that SQL Server JDBC driver is old and doesn't know anything about the AlwaysOn feature.
Any data source connection pool can be configured to check the status of the connection from the pool prior to doling it out to the client. If the connection cannot be used the pool will create a new one. That's true of all vendors and versions. I believe that's the best you can do.

JDBC requests to Oracle 11g failing to be commited although apparently succeding

We have an older web-based application (Java with Spring 2.5.4 framework) running on a GlassFish 3.1 (build 43) server. This application was recently (a few weeks ago) re-directed to use an Oracle 11g (11.2.0.3.0) database and ojdbc6.jar/orai18n.jar (up from Oracle 10g 10.2.0.3.0 and ojdbc14.jar) -- using a JDBC Thin connection. The application is using org.apache.commons.dbcp.BasicDataSource version 1.2.2 for connections and the database requests are handled either through Spring jdbcTemplate (via the JdbcDaoSupport abstract class) or Spring's PlatformTransactionManager.
This morning we noticed that application users were able to enter information, modify it, and later retrieve and print that data through the application, but there were no committed updates for the last 24 hours. This application currently has only a few users each day, and they were apparently sharing the same connection, which had been kept open by the connection pool during the last day, so their uncommitted updates were visible through the application but not through other connections to the database. When the connection was closed, the uncommitted updates were lost.
Examining the server logs showed no errors from the time of the last committed changes to the database through the times of the printed reports the next morning. In addition, even if some of the changes had somehow been made with the JDBC connection set to auto-commit false, there were explicit commits for some of those updates as part of a transaction which, in a try/catch block, should have executed either the "transactionManager.commit(transactionStatus);" or the "transactionManager.rollback(transactionStatus);" call, and these must have been processed without error. It looks as though the commit was returning successfully, but no commit actually occurred.
Restarting the GlassFish domain and the application restored the normal operation with the various updates being committed as they are entered.
My question is has anyone seen or heard about anything like this occurring and, if so, what could have caused it?
Thank you for any ideas here -- we are at a loss.
Some new information:
Examination of our Oracle 11g Server showed that near the time that we believe that the commits appeared to stop, there were four operations blocked on some other operation that we were not able to fully resolve, but was probably an update.
Examination of the Glassfish Server logs showed that the appearance of the worker threads changed following this estimated start time and fewer threads were appearing in the log until only one thread continued to be used for several hours.
The problem occurred again about one week later and was caught after about 1/2 hour. At this time, there were two worker threads in operation.
The problem occurred due to a combination of two things. The first was a method that set up a Spring transaction but had an exit path that bypassed both TransactionManager.commit() and TransactionManager.rollback() (as well as the several SQL requests making up the transaction). Although this was admittedly incorrect coding, in the past this transaction was closed and therefore had no effect on subsequent usage.
The solution was to ensure that the transaction was not started if there was nothing to be done and, in general, to double-check that all transactions, once started, are completed.
I am not certain exactly how or why this problem began presenting itself, so the following is partly conjecture. Apparently, upgrading to Oracle 11g and/or switching to the ojdbc6.jar driver altered the earlier behavior of the incorrect code so that the transaction was not terminated and the connection's auto-commit was left false. (It could also be due to some other change that we have not identified, since the special case above happens rarely, but does happen.) The corresponding JDBC connection appears to be bound to a specific GlassFish worker thread (I will call this a 'bad' thread below, as opposed to the normally acting 'good' threads). Whenever this 'bad' thread handles an application request (for this particular application), changes are uncommitted and selects return dirty data. As time goes on, when a change is requested on a 'good' thread and JDBC connection for data that already has an uncommitted change made on the 'bad' thread, the new request hangs and that worker thread hangs with it. Eventually all but the 'bad' worker thread are hung, and everything seems to work correctly from the application viewpoint, but nothing is ever committed.
Again, the solution was to correct the bad code.
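The fix described above amounts to an invariant: every transaction, once begun, must end in exactly one commit or rollback on every code path, including early exits and exceptions. A plain-JDBC sketch of that invariant (an illustration only; the application in question used Spring's PlatformTransactionManager, and the helper names here are invented):

```java
import java.sql.Connection;
import java.sql.SQLException;

// Sketch: run work in a transaction that always ends in commit() or
// rollback(), restoring the connection's auto-commit state afterwards.
class TxTemplate {
    interface Work {
        void run(Connection c) throws SQLException;
    }

    static void inTransaction(Connection c, Work work) throws SQLException {
        boolean previousAutoCommit = c.getAutoCommit();
        c.setAutoCommit(false);
        try {
            work.run(c);
            c.commit();                              // success path
        } catch (SQLException | RuntimeException e) {
            c.rollback();                            // failure path: never leave it open
            throw e;
        } finally {
            c.setAutoCommit(previousAutoCommit);     // restore before pool return
        }
    }
}
```

With this shape, an early return or exception inside the work cannot leave an open transaction bound to a pooled connection, which is exactly the failure mode described above.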

What are the implications of running a query against a MySQL database via Hibernate without starting a transaction?

It seems to me that we have some code that does not start a transaction for read-only operations, yet our queries via JPA/Hibernate, as well as straight SQL, seem to work. A Hibernate/JPA session would have been opened by our framework, but in a few spots in legacy code we found that no transactions were being opened.
What seems to happen is that the code usually runs as long as it does not use EntityManager.persist or EntityManager.merge. However, once in a while (maybe 1 in 10 times) the servlet container fails with this error...
Failed to load resource: the server responded with a status of 500 (org.hibernate.exception.JDBCConnectionException: The last packet successfully received from the server was 314,024,057 milliseconds ago. The last packet sent successfully to the server was 314,024,057 milliseconds ago. is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem.)
As far as I can tell, only the few spots in our application code where no transaction is started before a query is made have this problem. Does anybody else think the non-transactional queries could be causing this behaviour?
FYI here is our stack...
-Guice
-Guice-Persist
-Guice-Servlet
-MySql 5.1.63
-Hibernate/C3P0 4.1.4.Final
-Jetty
Yes, I think so.
If you start a query without opening a transaction, a transaction will be opened automatically by the underlying layer. That connection, with an open transaction, will be returned to the connection pool and given to another user, who will receive a connection with an already-open transaction, and that could lead to inconsistent state.
Here in my company we had a lot of problems in the past with read-only non-transactional queries, and adjusted our framework to handle this.
Besides that, we talked to the BoneCP developers and they agreed to develop a set of features to help handle this problem, like auto-rollback of uncommitted transactions returned to the pool, and printing a stack trace of the method that forgot to commit the transaction.
This matter was discussed here:
http://jolbox.com/forum/viewtopic.php?f=3&t=98
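The "auto-rollback on return" idea described above can be sketched in plain JDBC. This is an illustration of the concept, not BoneCP's actual implementation; the class, method, and the use of a simple Queue as the idle pool are all invented for the example.

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.util.Queue;

// Sketch: before a connection goes back to the pool, discard any
// transaction the borrower left open so the next borrower starts clean.
class PoolHygiene {
    static void returnToPool(Connection c, Queue<Connection> idle) throws SQLException {
        if (!c.getAutoCommit()) {
            c.rollback();           // drop uncommitted work from the last borrower
            c.setAutoCommit(true);  // hand connections out in a known state
        }
        idle.offer(c);
    }
}
```

This prevents the scenario in the answer where one user's implicitly opened transaction leaks to the next borrower of the same pooled connection.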
