Make Java App resilient

We have a distributed (note: not microservices) system which connects to one huge database (PostgreSQL 9.3.13). We have had several outages on this DB in the last year. The DB is set up for HA, and the virtual IP floats correctly to the slave.
Our application is written in Java, and as soon as the master is no longer reachable, the application starts to scream (via logs, which is fine) but unfortunately does not reconnect to the new master.
How can this be done in a more resilient way? Ideally the connection pool would reconnect to the new master as soon as it is available and (best case) all running updates/inserts would be rolled back and retried against the new master.
To me the problem seems to be the virtual IP: the connection pool simply does not reconnect.
We use DBCP2 as the connection pool. The data layer is Hibernate.
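A minimal sketch of one common mitigation, assuming a programmatically configured DBCP2 BasicDataSource (the URL, credentials, validation query, and timings are illustrative, not taken from the question): validating connections on borrow and testing idle ones means that, once the virtual IP has floated, stale connections to the old master are discarded and fresh ones are opened against whatever the VIP now points to.

    import org.apache.commons.dbcp2.BasicDataSource;

    public class PoolConfig {
        // Sketch: a DBCP2 pool that validates connections so that, after the
        // virtual IP floats to the new master, stale connections are discarded
        // instead of being handed back to the application.
        public static BasicDataSource newDataSource() {
            BasicDataSource ds = new BasicDataSource();
            ds.setDriverClassName("org.postgresql.Driver");
            ds.setUrl("jdbc:postgresql://db-virtual-ip:5432/mydb"); // illustrative URL
            ds.setUsername("app");
            ds.setPassword("secret");

            // Validate each connection before handing it out; broken connections
            // are dropped and a fresh one is opened.
            ds.setTestOnBorrow(true);
            ds.setValidationQuery("SELECT 1");
            ds.setValidationQueryTimeout(3); // seconds

            // Also test idle connections periodically so the pool heals even
            // when the application is not actively borrowing connections.
            ds.setTestWhileIdle(true);
            ds.setTimeBetweenEvictionRunsMillis(30_000);
            return ds;
        }
    }

Note that this does not by itself roll back and replay in-flight transactions; statements that were running when the master went away will still fail and need to be retried at the application level.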

Related

PostgreSQL JDBC Connection issue

We have a PostgreSQL 9.6 instance on an Ubuntu 18.04 machine. When we restart the Java services deployed in a Kubernetes cluster, the already existing idle connections don't get removed and the services create new connections on each restart. Because of this we have hit the connection limit many times and have to terminate connections manually each time. The same service versions are deployed on other instances, but we don't see this behaviour on those servers.
I have some questions regarding this:
Could this be a PostgreSQL configuration issue? I didn't find any timeout-related setting difference between the two instances (one works fine, the other doesn't).
If it is a Java service issue, what should I check?
If it is neither a PostgreSQL issue nor a Java issue, what should I look into?
If the client process dies without closing the database connection properly, it takes a while (2 hours by default) for the server to notice that the connection is dead.
The mechanism for that is provided by TCP and is called keepalive: after a certain idle time, the operating system starts sending keepalive packets. After a certain number of such packets without response, the TCP connection is closed, and the database backend process will die.
To make PostgreSQL detect dead connections faster, set the tcp_keepalives_idle parameter in postgresql.conf to less than 7200 seconds.
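The advice above is server-side. On the client side, the PostgreSQL JDBC driver exposes a matching tcpKeepAlive connection property, which helps the Java service notice a dead server socket; a minimal sketch (host and credentials are illustrative) might look like this, with the server-side tcp_keepalives_idle tuning still done separately in postgresql.conf:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.util.Properties;

    public class KeepAliveConnection {
        public static Connection open() throws Exception {
            Properties props = new Properties();
            props.setProperty("user", "app");        // illustrative credentials
            props.setProperty("password", "secret");
            // Ask the driver to enable TCP keepalive on its socket, so half-open
            // connections are detected according to the OS keepalive settings.
            props.setProperty("tcpKeepAlive", "true");

            return DriverManager.getConnection(
                    "jdbc:postgresql://db-host:5432/mydb", props);
        }
    }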

Datasource Microsoft JDBC Driver for SQL Server (AlwaysOn Availability Groups)

I have a question about connecting from a Java application, using the Microsoft JDBC Driver 4.0, to SQL Server 2014 with AlwaysOn Availability Groups set up for high availability.
With this setup, we connect to an availability group listener (specified in the DB connection string instead of any particular instance), so that DB fail-over etc. is handled gracefully by the listener, which connects to the next available instance behind the scenes if the current primary in the AG cluster goes down.
My questions are:
In the data source configured on the J2EE application server side (we use WebSphere), what happens to the connections already pooled by the data source?
When a database goes down, the AG listener will reconnect to the next available DB on the database side. But will it also, through the JDBC driver, send an event to the data source on the app server so that the already pooled connections are discarded and new ones created, so that transactions on the application side won't fail (or only fail briefly until fail-over completes and new connections are created)? Or does the Java application only find out after requesting a connection from the data source?
WebSphere Application Server is able to cope with bad connections and removes them from the pool. Exactly when this happens depends on some configurable options and on how fully the Microsoft JDBC driver takes advantage of the javax.sql.ConnectionEventListener API to send notifications to the application server.
In the ideal case, where a JDBC driver sends the connectionErrorOccurred event immediately for all connections, WebSphere Application Server responds by removing all of these connections from the pool and by marking any connection that is currently in use as bad so that it does not get returned to the pool once the application closes the handle. Lacking this, WebSphere Application Server will discover the first bad connection upon next use by the application, either via a connectionErrorOccurred event sent by the JDBC driver at that time or, lacking that, by inspecting the SQLState/error code of an exception for known indicators of bad connections. WebSphere Application Server then purges bad connections from the pool according to the configured Purge Policy. There are 3 options:
Purge Policy of Entire Pool: all connections are removed from the pool, and in-use connections are marked as bad so that they are not pooled.
Purge Policy of Failing Connection Only: only the specific connection on which the error actually occurred is removed from the pool or marked as bad and not returned to the pool.
Purge Policy of Validate All Connections: all connections are tested for validity (the Connection.isValid API); connections found to be bad are removed from the pool or marked as bad and not returned to the pool, while connections found to be valid remain in the pool and continue to be used.
I'm not sure from your description if you are using WebSphere Application Server traditional or Liberty. If traditional, there is an additional option for pre-testing connections as they are handed out of the pool, but be aware that turning this on can have performance implications.
That said, the one thing to be aware of is that regardless of any of the above, your application will always need to be capable of handling the possibility of errors due to bad connections (even if the connection pool is cleared, connections can go bad while in use) and respond by requesting a new connection and retrying the operation in a new transaction.
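A minimal sketch of that retry pattern, assuming a javax.sql.DataSource obtained from the application server (the SQL, table, and retry count are illustrative, not part of the answer): if a statement fails because the pooled connection has gone bad, the connection is discarded and the work is retried on a fresh connection in a new transaction.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import javax.sql.DataSource;

    public class RetryingUpdate {
        // Sketch: retry an update on a fresh connection if the first attempt
        // fails because the pooled connection had gone bad (e.g. after failover).
        // A real implementation would inspect the SQLState to distinguish
        // connection errors from ordinary statement failures before retrying.
        public static void updateStatus(DataSource ds, long id, String status)
                throws SQLException {
            final int maxAttempts = 2; // illustrative retry count
            SQLException last = null;
            for (int attempt = 1; attempt <= maxAttempts; attempt++) {
                try (Connection con = ds.getConnection()) {
                    con.setAutoCommit(false);
                    try (PreparedStatement ps = con.prepareStatement(
                            "UPDATE orders SET status = ? WHERE id = ?")) { // illustrative SQL
                        ps.setString(1, status);
                        ps.setLong(2, id);
                        ps.executeUpdate();
                        con.commit();
                        return;
                    } catch (SQLException e) {
                        con.rollback();
                        throw e;
                    }
                } catch (SQLException e) {
                    last = e; // the connection may have been bad; loop to get a new one
                }
            }
            throw last;
        }
    }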
Version 4 of that SQL Server JDBC driver is old and doesn't know anything about the Always On feature.
Any data source connection pool can be configured to check the status of a connection before doling it out to the client. If the connection cannot be used, the pool will create a new one. That's true of all vendors and versions. I believe that's the best you can do.

Java application connecting to replicated mysql databases

We have a Java application that connects to a MySQL database. We are now going to switch the database to replication mode by adding another DB instance. The idea is to provide DB high availability. The application should be able to switch to the standby DB if it is unable to connect to the main database. One way to implement this is to maintain two sets of connections, keep monitoring both databases, and, if the app is unable to connect to the main DB, switch to the other set of connections and continue.
My question is: is there a transparent way of switching connections through the MySQL connector itself? Or is there a utility that can sit between my app and the MySQL connector and do this job?
To clarify, we are planning to do master-master replication. Both writes and reads happen frequently.
Yes, Connector/J (the MySQL JDBC driver) offers connection failover.
It's not trivial to set up, though. This should get you started.
http://dev.mysql.com/doc/refman/5.5/en/connector-j-usagenotes-j2ee-concepts-load-balancing-failover.html
@Mike Brant's point is good. If your data is written infrequently and read frequently, you are best off having your app write only to a master DBMS and read from a pool of slave DBMSs. It's good programming practice to use a different Connection for the parts of your app that write and for those parts that read the data. You can set up the read-only Connection with load balancing and failover, while leaving the write Connection pointing to the master.
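As a rough sketch of what that looks like with Connector/J, assuming hypothetical host names (the property values are illustrative, not a recommendation):

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class FailoverConnection {
        public static Connection open() throws Exception {
            // Connector/J failover syntax: list the primary host first, then the
            // standby; the driver falls back to the next host if the first is down.
            // failOverReadOnly=false keeps the connection writable after failover,
            // which matters for a master-master setup.
            // For a read-only pool spread across slaves, the loadbalance scheme
            // (jdbc:mysql:loadbalance://host1,host2/db) can be used instead.
            String url = "jdbc:mysql://db-primary:3306,db-standby:3306/mydb"
                    + "?failOverReadOnly=false";
            return DriverManager.getConnection(url, "app", "secret");
        }
    }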

How to save time when opening JDBC MYSQL connections

I have a very short Java application that just opens a connection to a remote MySQL database, reads some data, prints it, and exits. The most time-consuming part of the application is the database connection.
Currently I have only a single thread, and my only concern is to save the time of opening the connection.
I thought of several ways to make it faster, but it turned out they do not help:
Connection Pooling - doesn't help because the pool lives only during a single run of the application. When the application is terminated, the pool is gone, and when I re-run the application, I have to re-open all the connections in the pool.
mysql-proxy - connects only to the local server: mysql-proxy for a remote MySQL server
TCP/IP server - I thought of holding a local TCP/IP server that will keep a persistent open connection and send it to a TCP/IP client on request. However, Connection objects cannot be serialized, so I have no way to pass the Connection object from client to server.
Any other option?
Generally, connecting to a DB is one of the most time-consuming operations. If the application has to be started and stopped each time, there is little you can do.
Using connection pooling in a web server and having your app talk to that web server (for example via JSON over HTTP) might be an option.
You said you have a very short application, so your third option might work if you put the database logic into your "option 3" TCP/IP server and just forward the results to the connecting client. This is a typical application server pattern.
You should also consider the network lookup overhead (https://stackoverflow.com/q/3641155/1055715), which Marc B mentioned in his comment.
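A minimal, purely illustrative sketch of that "option 3" application-server pattern (the host, port, and the one-query-per-request protocol are assumptions, and there is no authentication or validation here): a small long-lived process owns the expensive remote connection, and short-lived client runs talk to it over a local socket instead of opening their own JDBC connection.

    import java.io.*;
    import java.net.*;
    import java.sql.*;

    public class LocalQueryServer {
        public static void main(String[] args) throws Exception {
            // Open the remote connection once and keep it for the server's lifetime.
            Connection db = DriverManager.getConnection(
                    "jdbc:mysql://qa-srv:3306/mydb", "user", "password");

            try (ServerSocket server = new ServerSocket(4040)) {
                while (true) {
                    try (Socket client = server.accept();
                         BufferedReader in = new BufferedReader(
                                 new InputStreamReader(client.getInputStream()));
                         PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {

                        String sql = in.readLine();          // one query per request
                        if (sql == null) {
                            continue;                        // client disconnected
                        }
                        try (Statement st = db.createStatement();
                             ResultSet rs = st.executeQuery(sql)) {
                            while (rs.next()) {
                                out.println(rs.getString(1)); // forward first column as text
                            }
                        } catch (SQLException e) {
                            out.println("ERROR: " + e.getMessage());
                        }
                    }
                }
            }
        }
    }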
It turns out the best solution is to use mysql-proxy with a script that handles connection pooling (a combination of my first two options). I found one such script here:
http://forge.mysql.com/tools/tool.php?id=151
It was probably written for an older version of mysql-proxy, so I had to fix it (if anyone needs the fixed version, write me).
It works like a charm - I run the exact same application as before, the only change is in the connection string: instead of connecting to "qa-srv:3308" (the remote server) I connect to "127.0.0.1:4040" (the proxy server).

JDBC Connection Link Failure - How to fail over?

I have a stand-alone Java Windows application developed with Swing. It connects to a MySQL database for data storage. When the database connection fails, I get a link failure exception from the MySQL JDBC driver (MySQLNonTransientConnectionException). I don't want to re-instantiate my database connection object, or the whole program, whenever such a link failure happens. I just want to tell the user to try again later, without having to restart the entire application; asking the user to restart would probably give a negative impression of the program's quality. What do you think would be the preferred way for a standard Java application to fail over after such a database link failure without having to re-instantiate all the communication objects? Thanks in advance.
Use a connection pool (such as c3p0 or DBCP). Your application takes Connections from the pool, executes the statement(s), and puts the Connection back into the pool. The pool can be configured to test the JDBC connections; for example, if they become stale, they can be automatically replaced by the pool.
If your application takes a Connection from the pool, it will be a valid Connection. Let the pool handle the management of valid/invalid/stale JDBC Connections.
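A minimal sketch of that take-from-the-pool-per-operation pattern (the DataSource is assumed to be a validating pool such as the DBCP2 configuration sketched earlier; the SQL and the user-facing message are illustrative):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import javax.sql.DataSource;
    import javax.swing.JOptionPane;

    public class CustomerLookup {
        private final DataSource pool; // long-lived; individual Connections are not

        public CustomerLookup(DataSource pool) {
            this.pool = pool;
        }

        // Borrow a Connection only for the duration of one operation and let
        // try-with-resources return it to the pool afterwards.
        public String findName(long customerId) {
            try (Connection con = pool.getConnection();
                 PreparedStatement ps = con.prepareStatement(
                         "SELECT name FROM customer WHERE id = ?")) { // illustrative SQL
                ps.setLong(1, customerId);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next() ? rs.getString(1) : null;
                }
            } catch (SQLException e) {
                // The link is down: tell the user to try again later instead of
                // forcing an application restart; the pool hands out a fresh
                // connection once the database is reachable again.
                JOptionPane.showMessageDialog(null,
                        "Database unavailable, please try again later.");
                return null;
            }
        }
    }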
