Make DB fail deterministically for testing - java

I have a Java application that uses lots of java.sql.Connection to a database.
I want to test that, if the database is unavailable, my services return the appropriate error codes (distinguishing between temporary and permanent problems e.g. HTTP 500 and 503).
For testing, my application connects to an embedded, local, in-memory h2 database; the application is not aware of this, only my integration test is.
How can I make writes to the database fail deterministically, e.g. hook into commits and make them throw a custom SQLException? I want a global 'database is unavailable' boolean in the test code that affects all connections and makes my application exercise its reconnect logic.
(I had started by proxying Connection and putting an if(failFlag) throw new MySimulateFailureException() in commit(); but this didn't catch PreparedStatement.executeUpdate(); before I embark on proxying the PreparedStatement too - it's a lot of methods! - I'd like to be taught a better way...)

I think this is a good candidate for using aspects. With e.g. Spring it is supremely easy to pointcut entire packages or just certain methods that you wish to fail; specifically, you could have a before advice that always throws a ConnectException, or do something more advanced with an around advice.
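For illustration, a minimal sketch of such an advice with Spring AOP could look like this (the pointcut package and the global flag are assumptions, not from the question):

import java.sql.SQLException;

import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import org.springframework.stereotype.Component;

// Sketch only: package name and flag are illustrative assumptions.
@Aspect
@Component
public class SimulatedDbOutageAspect {

    // Flip this from the integration test to simulate an unavailable database.
    public static volatile boolean dbFailure = false;

    // Fail every call into the (assumed) DAO package while the flag is set.
    // Assumes the advised methods declare SQLException; otherwise the proxy surfaces it wrapped.
    @Before("execution(* com.example.dao..*(..))")
    public void failIfDatabaseDown() throws SQLException {
        if (dbFailure) {
            throw new SQLException("Simulated database outage (test)");
        }
    }
}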
Cheers,

I ended up making my own Java reflection wrapper that intercepts Connection.commit and the PreparedStatement.execute... methods.
My final code in my 'DBFactory' class:
@SuppressWarnings("serial")
public class MockFailureException extends SQLException {
    private MockFailureException() {
        super("The database has been deliberately faulted as part of a test-case");
    }
}

private class MockFailureWrapper implements InvocationHandler {
    final Object obj;

    private MockFailureWrapper(Object obj) {
        this.obj = obj;
    }

    @Override public Object invoke(Object proxy, Method m, Object[] args) throws Throwable {
        // Fail commits and all execute* calls while the global failure flag is set.
        if (dbFailure && ("commit".equals(m.getName()) || m.getName().startsWith("execute")))
            throw new MockFailureException();
        Object result;
        try {
            result = m.invoke(obj, args);
            // Wrap any returned PreparedStatement so its execute* methods are intercepted too.
            if (result instanceof PreparedStatement)
                result = java.lang.reflect.Proxy.newProxyInstance(
                        result.getClass().getClassLoader(),
                        result.getClass().getInterfaces(),
                        new MockFailureWrapper(result));
        } catch (InvocationTargetException e) {
            throw e.getTargetException();
        } catch (Exception e) {
            throw new RuntimeException("unexpected invocation exception: " + e.getMessage());
        }
        return result;
    }
}

public Connection newConnection() throws SQLException {
    Connection connection = DriverManager.getConnection("jdbc:h2:mem:" + uuid + ";CREATE=TRUE;DB_CLOSE_ON_EXIT=FALSE");
    return (Connection) java.lang.reflect.Proxy.newProxyInstance(
            connection.getClass().getClassLoader(),
            connection.getClass().getInterfaces(),
            new MockFailureWrapper(connection));
}

Related

Automatic retry of transactions/requests in Dropwizard/JPA/Hibernate

I am currently implementing a REST API web service using the Dropwizard framework together with dropwizard-hibernate respectively JPA/Hibernate (using a PostgreSQL database).
I have a method inside a resource which I annotated with @UnitOfWork to get one transaction for the whole request.
The resource method calls a method of one of my DAOs which extends AbstractDAO<MyEntity> and is used to communicate retrieval or modification of my entities (of type MyEntity) with the database.
This DAO method does the following: First it selects an entity instance and therefore a row from the database. Afterwards, the entity instance is inspected and based on its properties, some of its properties can be altered. In this case, the row in the database should be updated.
I didn't specify anything else regarding caching, locking or transactions anywhere, so I assume the default is some kind of optimistic locking mechanism enforced by Hibernate.
Therefore (I think), when deleting the entity instance in another thread after selecting it from the database in the current one, a StaleStateException is thrown when trying to commit the transaction because the entity instance which should be updated has been deleted before by the other thread.
When using the #UnitOfWork annotation, my understanding is that I'm not able to catch this exception, neither in the DAO method nor in the resource method.
I could now implement an ExceptionMapper<StaleStateException> for Jersey to deliver an HTTP 503 response with a Retry-After header or something like that to the client to tell it to retry its request.
But I'd rather first like to retry the request/transaction (which is basically the same here because of the @UnitOfWork annotation) while still on the server.
Is there any example implementation for a server-sided transaction retry mechanism when using Dropwizard? Like retrying a configurable amount of times (e.g. 3) and then failing with an exception/HTTP 503 response.
How would you implement this? First thing that came to my mind is another annotation like @Retry(exception = StaleStateException.class, count = 3) which I could add to my resource.
Any suggestions on this?
Or is there an alternative solution to my problem considering different locking/transaction-related things?
An alternative approach is to use an injection framework - in my case Guice - and use method interceptors for this. This is a more generic solution.
DW integrates with Guice very smoothly through https://github.com/xvik/dropwizard-guicey
I have a generic implementation that can retry any exception. It works, as yours, on an annotation, as follows:
@Target({ElementType.TYPE, ElementType.METHOD})
@Retention(RetentionPolicy.RUNTIME)
public @interface Retry {
}
The interceptor then does (with docs):
/**
 * Abstract interceptor to catch exceptions and retry the method automatically.
 * Things to note:
 *
 * 1. Method must be idempotent (you can invoke it x times without altering the result)
 * 2. Method MUST re-open a connection to the DB if that is what is retried. Connections are in an undefined state after a rollback/deadlock.
 *    You can try and reuse them, however the result will likely not be what you expected
 * 3. Implement the retry logic intelligently. You may need to unpack the exception to get to the original.
 *
 * @author artur
 *
 */
public abstract class RetryInterceptor implements MethodInterceptor {

    private static final Logger log = Logger.getLogger(RetryInterceptor.class);

    @Override
    public Object invoke(MethodInvocation invocation) throws Throwable {
        if (invocation.getMethod().isAnnotationPresent(Retry.class)) {
            int retryCount = 0;
            boolean retry = true;
            while (retry && retryCount < maxRetries()) {
                try {
                    return invocation.proceed();
                } catch (Exception e) {
                    log.warn("Exception occurred while trying to execute method", e);
                    if (!retry(e)) {
                        retry = false;
                    } else {
                        retryCount++;
                    }
                }
            }
            throw new IllegalStateException("All retries of the invocation failed");
        }
        // Method is not annotated with @Retry: invoke it normally.
        return invocation.proceed();
    }

    protected boolean retry(Exception e) {
        return false;
    }

    protected int maxRetries() {
        return 0;
    }
}
A few things to note about this approach.
The retried method must be designed to be invoked multiple times without altering the result (e.g. if the method stores temporary results in the form of increments, then executing it twice might increment twice)
Database exceptions are generally not safe to retry. The retried code must open a new connection (in particular when retrying deadlocks, which is my case)
Other than that this base implementation simply catches anything and then delegates the retry count and detection to the implementing class. For example, my specific deadlock retry interceptor:
public class DeadlockRetryInterceptor extends RetryInterceptor {

    private static final Logger log = Logger.getLogger(DeadlockRetryInterceptor.class);

    @Override
    protected int maxRetries() {
        return 6;
    }

    @Override
    protected boolean retry(Exception e) {
        SQLException ex = unpack(e);
        if (ex == null) {
            return false;
        }
        int errorCode = ex.getErrorCode();
        log.info("Found exception: " + ex.getClass().getSimpleName() + " With error code: " + errorCode, ex);
        return errorCode == 1205;
    }

    private SQLException unpack(final Throwable t) {
        if (t == null) {
            return null;
        }
        if (t instanceof SQLException) {
            return (SQLException) t;
        }
        return unpack(t.getCause());
    }
}
And finally, I can bind this to Guice by doing:
bindInterceptor(Matchers.any(), Matchers.annotatedWith(Retry.class), new DeadlockRetryInterceptor());
This matches any class, and any method annotated with @Retry.
An example method for retry would be:
@Override
@Retry
public List<MyObject> getSomething(int count, String property) {
    try (Connection con = datasource.getConnection();
         Context c = metrics.timer(TIMER_NAME).time())
    {
        // do some work
        // return some stuff
    } catch (SQLException e) {
        // catches exception and throws it out
        throw new RuntimeException("Some more specific thing", e);
    }
}
The reason I need an unpack is that old legacy cases, like this DAO impl, already catch their own exceptions.
Note also how the method (a get) obtains a new connection from my datasource pool each time it is invoked, and how no modifications are done inside it (hence it is safe to retry).
I hope that helps.
You can do similar things by implementing ApplicationListeners or RequestFilters or similar, however I think this is a more generic approach that could retry any kind of failure on any method that is guice bound.
Also note that guice can only intercept methods when it constructs the class (inject annotated constructor etc.)
Hope that helps,
Artur
I found a pull request in the Dropwizard repository that helped me. It basically enables the possibility of using the #UnitOfWork annotation on other than resource methods.
Using this, I was able to detach the session opening/closing and transaction creation/committing lifecycle from the resource method by moving the #UnitOfWork annotation from the resource method to the DAO method which is responsible for the data manipulation which causes the StaleStateException.
Then I was able to build a retry mechanism around this DAO method.
Exemplary explanation:
// class MyEntityDAO extends AbstractDAO<MyEntity>
@UnitOfWork
void tryManipulateData() {
    // Due to optimistic locking, this operation causes a StaleStateException when
    // committed "by the @UnitOfWork annotation" after returning from this method.
}

// Retry mechanism, implemented wheresoever.
void manipulateData() {
    while (true) {
        try {
            tryManipulateData();
        } catch (StaleStateException e) {
            continue; // Retry.
        }
        return;
    }
}

// class MyEntityResource
@POST
// ...
// @UnitOfWork can also be used here if nested transactions are desired.
public Response someResourceMethod() {
    // Call manipulateData() somehow.
}
Of course, one could also attach the @UnitOfWork annotation to a method inside a service class which makes use of the DAOs instead of applying it directly to a DAO method. In whatever class the annotation is used, remember to create a proxy of the instances with the UnitOfWorkAwareProxyFactory as described in the pull request.
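For illustration only, creating such a proxy could look roughly like this (the bundle variable and the DAO constructor arguments are assumptions about your setup):

// Sketch: @UnitOfWork on the proxied DAO's methods then gets its own session/transaction.
// 'hibernateBundle' is assumed to be the application's HibernateBundle.
MyEntityDAO dao = new UnitOfWorkAwareProxyFactory(hibernateBundle)
        .create(MyEntityDAO.class, SessionFactory.class, hibernateBundle.getSessionFactory());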

Java proxy for Autocloseable (Jedis resources)

I am trying to find out whether it is possible to create a Java dynamic proxy that automatically closes Autocloseable resources without having to remember to wrap such resources in a try-with-resources block.
For example I have a JedisPool that has a getResource method which can be used like that:
try (Jedis jedis = jedisPool.getResource()) {
    // use jedis client
}
For now I did something like that:
class JedisProxy implements InvocationHandler {

    private final JedisPool pool;

    public JedisProxy(JedisPool pool) {
        this.pool = pool;
    }

    public static JedisCommands newInstance(Pool<Jedis> pool) {
        return (JedisCommands) java.lang.reflect.Proxy.newProxyInstance(
                JedisCommands.class.getClassLoader(),
                new Class[] { JedisCommands.class },
                new JedisProxy(pool));
    }

    @Override
    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        try (Jedis client = pool.getResource()) {
            return method.invoke(client, args);
        } catch (InvocationTargetException e) {
            throw e.getTargetException();
        } catch (Exception e) {
            throw e;
        }
    }
}
Now each time I call a method on Jedis (JedisCommands), the call is passed to the proxy, which gets a new client from the pool, executes the method and returns the resource to the pool.
It works fine, but when I want to execute multiple methods on the client, a resource is taken from the pool and returned again for each method (which might be time-consuming). Do you have any idea how to improve that?
You would end up with your own "transaction manager" in which you normally would return the object to the pool immediately, but if you had started a "transaction" the object wouldn't be returned to the pool until you've "committed" the "transaction".
Suddenly your problem with using try-with-resources turns into an actual problem due to the use of a hand-crafted custom mechanism.
Using try with resources pros:
Language built-in feature
Allows you to attach a catch block, and the resources are still released
Simple, consistent syntax, so that even if a developer weren't familiar with it, he would see all the Jedis code surrounded by it and (hopefully) think "So this must be the correct way to use this"
Cons:
You need to remember to use it
The pros of your suggestion (you can tell me if I forgot anything):
Automatic closing even if the developer doesn't close the resource, preventing a resource leak
Cons:
Extra code always means extra places to find bugs in
If you don't create a "transaction" mechanism, you may suffer from a performance hit (I'm not familiar with [jr]edis or your project, so I can't say whether it's really an issue or not)
If you do create it, you'll have even more extra code which is prone to bugs
Syntax is no longer simple, and will be confusing to anyone coming to the project
Exception handling becomes more complicated
You'll be making all your proxy-calls through reflection (a minor issue, but hey, it's my list ;)
Possibly more, depending on what the final implementation will be
If you think I'm not making valid points, please tell me. Otherwise my assertion will remain "you have a 'solution' looking for a problem".
I don’t think that this is going in the right direction. After all, developers should get used to handling resources correctly, and IDEs/compilers are able to issue warnings when autocloseable resources aren’t handled using try(…){}…
However, the task of creating a proxy that decorates all invocations, plus a way to decorate a batch of multiple actions as a whole, is of a general nature; therefore, it has a general solution:
class JedisProxy implements InvocationHandler {

    private final JedisPool pool;

    public JedisProxy(JedisPool pool) {
        this.pool = pool;
    }

    public static JedisCommands newInstance(Pool<Jedis> pool) {
        return (JedisCommands) java.lang.reflect.Proxy.newProxyInstance(
                JedisCommands.class.getClassLoader(),
                new Class[] { JedisCommands.class },
                new JedisProxy(pool));
    }

    @Override
    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        try (Jedis client = pool.getResource()) {
            return method.invoke(client, args);
        } catch (InvocationTargetException e) {
            throw e.getTargetException();
        }
    }

    public static void executeBatch(JedisCommands c, Consumer<JedisCommands> action) {
        InvocationHandler ih = Proxy.getInvocationHandler(c);
        if (!(ih instanceof JedisProxy))
            throw new IllegalArgumentException();
        try (JedisCommands actual = ((JedisProxy) ih).pool.getResource()) {
            action.accept(actual);
        }
    }

    public static <R> R executeBatch(JedisCommands c, Function<JedisCommands, R> action) {
        InvocationHandler ih = Proxy.getInvocationHandler(c);
        if (!(ih instanceof JedisProxy))
            throw new IllegalArgumentException();
        try (JedisCommands actual = ((JedisProxy) ih).pool.getResource()) {
            return action.apply(actual);
        }
    }
}
Note that the type conversion of a Pool<Jedis> to a JedisPool looked suspicious to me but I didn’t change anything in that code as I don’t have these classes to verify it.
Now you can use it like
JedisCommands c = JedisProxy.newInstance(pool);
c.someAction(); // acquire-someAction-close

JedisProxy.executeBatch(c, jedi -> {
    jedi.someAction();
    jedi.anotherAction();
}); // acquire-someAction-anotherAction-close

ResultType foo = JedisProxy.executeBatch(c, jedi -> {
    jedi.someAction();
    return jedi.someActionReturningValue(…);
}); // acquire-someAction-someActionReturningValue-close-return the value
The batch execution requires the instance to be a proxy, otherwise an exception is thrown as it’s clear that this method cannot guarantee a particular behavior for an unknown instance with an unknown life cycle.
Also, developers now have to be aware of the proxy and the batch execution feature just like they have to be aware of resources and the try(…){} statement when not using a proxy. On the other hand, if they aren’t, they lose performance when invoking multiple methods on a proxy without using the batch method, whereas they let resources leak when invoking multiple methods without try(…){} on an actual, non-proxy resource…

Is there a good way to check whether a Datastax Session.executeAsync() has thrown an exception?

I'm trying to speed up our code by calling session.executeAsync() instead of session.execute() for DB writes.
We have use cases where the DB connection might be down, currently the previous execute() throws an exception when the connection is lost (no hosts reachable in the cluster). We can catch these exceptions and retry or save the data somewhere else etc...
With executeAsync(), it doesn't look like there's any way to fulfill this use case - the returned ResultSetFuture object needs to be accessed to check the result, which would defeat the purpose of using the executeAsync() in the first place...
Is there any way to add a listener (or something similar) anywhere for the executeAsync() call that will asynchronously notify some other code that a DB write has failed?
In case it's pertinent:
Datastax 1.0.2
Java 1.7.40
You could try something like this since the ResultSetFuture implements ListenableFuture from the Guava library:
ResultSetFuture resultSetFuture = session.executeAsync("SELECT * FROM test.t;");
Futures.addCallback(resultSetFuture, new FutureCallback<ResultSet>() {
    @Override
    public void onSuccess(@Nullable com.datastax.driver.core.ResultSet resultSet) {
        // do nothing
    }

    @Override
    public void onFailure(Throwable throwable) {
        System.out.printf("Failed with: %s\n", throwable);
    }
});
This approach will not block your application.
You could pass a callback to the method to take action on exception. If you need the ResultSetFuture, you could try something like this:
interface ResultSetFutureHandler {
    void handle(ResultSetFuture rs);
}

public void catchException(ResultSetFutureHandler handler) {
    ResultSetFuture resultSet = null;
    try {
        resultSet = getSession().executeAsync(query);
        for (Row row : resultSet.getUninterruptibly()) {
            // do something
        }
    } catch (RuntimeException e) {
        handler.handle(resultSet); // resultSet may or may not be null
    }
}
Then call it like this:
catchException(new ResultSetFutureHandler() {
    @Override
    public void handle(ResultSetFuture resultSet) {
        // do something with the ResultSetFuture
    }
});
If you need to know what the exception was, add an exception parameter too:
interface ResultSetFutureHandler {
    void handle(ResultSetFuture rs, RuntimeException e);
}

How do I implement a DAO manager using JDBC and connection pools?

My problem is as follows. I need a class that works as a single point to a database connection in a web system, so to avoid having one user with two open connections. I need it to be as optimal as possible and it should manage every transaction in the system. In other words only that class should be able to instantiate DAOs. And to make it better, it should also use connection pooling! What should I do?
You will need to implement a DAO Manager. I took the main idea from this website; however, I made my own implementation that solves a few issues.
Step 1: Connection pooling
First of all, you will have to configure a connection pool. A connection pool is, well, a pool of connections. When your application runs, the connection pool will start a certain number of connections; this is done to avoid creating connections at runtime, since it's an expensive operation. This guide is not meant to explain how to configure one, so go look around about that.
For the record, I'll use Java as my language and Glassfish as my server.
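If you are not on an application server, a pooled DataSource can also be created programmatically; as a purely illustrative sketch using the HikariCP library (URL and credentials are placeholders, and this is not part of the original answer):

// Illustrative HikariCP pool configuration; the answer below uses a Glassfish JNDI DataSource instead.
HikariConfig config = new HikariConfig();
config.setJdbcUrl("jdbc:mysql://localhost:3306/mydb");
config.setUsername("user");
config.setPassword("secret");
config.setMaximumPoolSize(10); // connections kept ready by the pool
DataSource pooledDataSource = new HikariDataSource(config);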
Step 2: Connect to the database
Let's start by creating a DAOManager class. Let's give it methods to open and close a connection in runtime. Nothing too fancy.
public class DAOManager {
    public DAOManager() throws Exception {
        try
        {
            InitialContext ctx = new InitialContext();
            this.src = (DataSource) ctx.lookup("jndi/MYSQL"); //The string should be the same name you're giving to your JNDI in Glassfish.
        }
        catch(Exception e) { throw e; }
    }

    public void open() throws SQLException {
        try
        {
            if(this.con==null || this.con.isClosed())
                this.con = src.getConnection();
        }
        catch(SQLException e) { throw e; }
    }

    public void close() throws SQLException {
        try
        {
            if(this.con!=null && !this.con.isClosed())
                this.con.close();
        }
        catch(SQLException e) { throw e; }
    }

    //Private
    private DataSource src;
    private Connection con;
}
This isn't a very fancy class, but it'll be the basis of what we're going to do. So, doing this:
DAOManager mngr = new DAOManager();
mngr.open();
mngr.close();
should open and close your connection to the database in an object.
Step 3: Make it a single point!
What, now, if we did this?
DAOManager mngr1 = new DAOManager();
DAOManager mngr2 = new DAOManager();
mngr1.open();
mngr2.open();
Some might argue, "why in the world would you do this?". But then you never know what a programmer will do. Even then, the programmer might forget to close a connection before opening a new one. Plus, this is a waste of resources for the application. Stop here if you actually want to have two or more open connections; this will be an implementation for one connection per user.
In order to make it a single point, we will have to convert this class into a singleton. A singleton is a design pattern that allows us to have one and only one instance of any given object. So, let's make it a singleton!
We must convert our public constructor into a private one. We must only give an instance to whoever calls it. The DAOManager then becomes a factory!
We must also add a new private class that will actually store a singleton.
Alongside all of this, we also need a getInstance() method that will give us a singleton instance we can call.
Let's see how it's implemented.
public class DAOManager {
    public static DAOManager getInstance() {
        return DAOManagerSingleton.INSTANCE;
    }

    public void open() throws SQLException {
        try
        {
            if(this.con==null || this.con.isClosed())
                this.con = src.getConnection();
        }
        catch(SQLException e) { throw e; }
    }

    public void close() throws SQLException {
        try
        {
            if(this.con!=null && !this.con.isClosed())
                this.con.close();
        }
        catch(SQLException e) { throw e; }
    }

    //Private
    private DataSource src;
    private Connection con;

    private DAOManager() throws Exception {
        try
        {
            InitialContext ctx = new InitialContext();
            this.src = (DataSource) ctx.lookup("jndi/MYSQL");
        }
        catch(Exception e) { throw e; }
    }

    private static class DAOManagerSingleton {
        public static final DAOManager INSTANCE;
        static
        {
            DAOManager dm;
            try
            {
                dm = new DAOManager();
            }
            catch(Exception e)
            {
                dm = null;
            }
            INSTANCE = dm;
        }
    }
}
When the application starts, whenever anyone needs a singleton the system will instantiate one DAOManager. Quite neat, we've created a single access point!
But singleton is an antipattern because reasons!
I know some people won't like singleton. However it solves the problem (and has solved mine) quite decently. This is just a way of implementing this solution, if you have other ways you're welcome to suggest so.
Step 4: But there's something wrong...
Yes, indeed there is. A singleton will create only ONE instance for the whole application! And this is wrong on many levels, especially if we have a web system where our application will be multithreaded! How do we solve this, then?
Java provides a class named ThreadLocal. A ThreadLocal variable will have one instance per thread. Hey, it solves our problem! See more about how it works; you will need to understand its purpose so we can continue.
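As a quick illustration of the semantics (not part of the original answer), each thread that touches the variable gets its own independent copy:

// Minimal demonstration: every thread sees its own value of the ThreadLocal.
ThreadLocal<Integer> counter = new ThreadLocal<Integer>() {
    @Override
    protected Integer initialValue() {
        return 0; // each thread starts from its own zero
    }
};
counter.set(counter.get() + 1);    // affects only the current thread
System.out.println(counter.get()); // prints 1 here; other threads would still see 0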
Let's make our INSTANCE ThreadLocal then. Modify the class this way:
public class DAOManager {
    public static DAOManager getInstance() {
        return DAOManagerSingleton.INSTANCE.get();
    }

    public void open() throws SQLException {
        try
        {
            if(this.con==null || this.con.isClosed())
                this.con = src.getConnection();
        }
        catch(SQLException e) { throw e; }
    }

    public void close() throws SQLException {
        try
        {
            if(this.con!=null && !this.con.isClosed())
                this.con.close();
        }
        catch(SQLException e) { throw e; }
    }

    //Private
    private DataSource src;
    private Connection con;

    private DAOManager() throws Exception {
        try
        {
            InitialContext ctx = new InitialContext();
            this.src = (DataSource) ctx.lookup("jndi/MYSQL");
        }
        catch(Exception e) { throw e; }
    }

    private static class DAOManagerSingleton {
        public static final ThreadLocal<DAOManager> INSTANCE;
        static
        {
            ThreadLocal<DAOManager> dm;
            try
            {
                dm = new ThreadLocal<DAOManager>(){
                    @Override
                    protected DAOManager initialValue() {
                        try
                        {
                            return new DAOManager();
                        }
                        catch(Exception e)
                        {
                            return null;
                        }
                    }
                };
            }
            catch(Exception e)
            {
                dm = null;
            }
            INSTANCE = dm;
        }
    }
}
I would seriously love to not do this
catch(Exception e)
{
return null;
}
but initialValue() can't throw an exception. Oh, initialValue() you mean? This method tells us what value the ThreadLocal variable will hold. Basically we're initializing it. So, thanks to this we can now have one instance per thread.
Step 5: Create a DAO
A DAOManager is nothing without a DAO. So we should at least create a couple of them.
A DAO, short for "Data Access Object", is a design pattern that gives the responsibility of managing database operations to a class representing a certain table.
In order to use our DAOManager more efficiently, we will define a GenericDAO, which is an abstract DAO that will hold the common operations between all DAOs.
public abstract class GenericDAO<T> {
    public abstract int count() throws SQLException;

    //Protected
    protected final String tableName;
    protected Connection con;

    protected GenericDAO(Connection con, String tableName) {
        this.tableName = tableName;
        this.con = con;
    }
}
For now, that will be enough. Let's create some DAOs. Let's suppose we have two POJOs: First and Second, both with just a String field named data and its getters and setters.
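For reference, the assumed First POJO could be as simple as this (Second is identical apart from the name):

// Minimal POJO assumed by the DAOs below.
public class First {
    private String data;

    public String getData() { return data; }
    public void setData(String data) { this.data = data; }
}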
public class FirstDAO extends GenericDAO<First> {
    public FirstDAO(Connection con) {
        super(con, TABLENAME);
    }

    @Override
    public int count() throws SQLException {
        String query = "SELECT COUNT(*) AS count FROM "+this.tableName;
        PreparedStatement counter;
        try
        {
            counter = this.con.prepareStatement(query);
            ResultSet res = counter.executeQuery();
            res.next();
            return res.getInt("count");
        }
        catch(SQLException e){ throw e; }
    }

    //Private
    private final static String TABLENAME = "FIRST";
}
SecondDAO will have more or less the same structure, just changing TABLENAME to "SECOND".
Step 6: Making the manager a factory
DAOManager should not only serve as a single connection point. Actually, DAOManager should answer this question:
Who is the one responsible of managing the connections to the database?
The individual DAOs shouldn't manage them; DAOManager should. We've partially answered the question, but now we shouldn't let anyone manage other connections to the database, not even the DAOs. But the DAOs need a connection to the database! Who should provide it? DAOManager indeed! What we should do is make a factory method inside DAOManager. Not just that, but DAOManager will also hand them the current connection!
Factory is a design pattern that will allow us to create instances of a certain superclass, without knowing exactly what child class will be returned.
First, let's create an enum listing our tables.
public enum Table { FIRST, SECOND }
And now, the factory method inside DAOManager:
public GenericDAO getDAO(Table t) throws SQLException
{
    try
    {
        if(this.con == null || this.con.isClosed()) //Let's ensure our connection is open
            this.open();
    }
    catch(SQLException e){ throw e; }

    switch(t)
    {
        case FIRST:
            return new FirstDAO(this.con);
        case SECOND:
            return new SecondDAO(this.con);
        default:
            throw new SQLException("Trying to link to a nonexistent table.");
    }
}
Step 7: Putting everything together
We're good to go now. Try the following code:
DAOManager dao = DAOManager.getInstance();
FirstDAO fDao = (FirstDAO)dao.getDAO(Table.FIRST);
SecondDAO sDao = (SecondDAO)dao.getDAO(Table.SECOND);
System.out.println(fDao.count());
System.out.println(sDao.count());
dao.close();
Isn't it fancy and easy to read? Not just that, but when you call close(), you close every single connection the DAOs are using. But how?! Well, they're sharing the same connection, so it's just natural.
Step 8: Fine-tuning our class
We can do several things from here on. To ensure connections are closed and returned to the pool, do the following in DAOManager:
@Override
protected void finalize() throws Throwable
{
    try{ this.close(); }
    catch(SQLException ignored) { }
    finally{ super.finalize(); }
}
You can also implement methods that encapsulate setAutoCommit(), commit() and rollback() from the Connection so you can have a better handling of your transactions. What I also did is, instead of just holding a Connection, DAOManager also holds a PreparedStatement and a ResultSet. So, when calling close() it also closes both. A fast way of closing statements and result sets!
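For example, those transaction helpers could be sketched roughly like this inside DAOManager (the method names are my own, not the author's):

// Sketch of transaction helpers delegating to the shared Connection.
public void transactionBegin() throws SQLException {
    this.open();                   // make sure we have an open connection
    this.con.setAutoCommit(false);
}

public void transactionCommit() throws SQLException {
    this.con.commit();
    this.con.setAutoCommit(true);
}

public void transactionRollback() throws SQLException {
    this.con.rollback();
    this.con.setAutoCommit(true);
}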
I hope this guide can be of any use to you in your next project!
I think that if you want to do a simple DAO pattern in plain JDBC you should keep it simple:
public List<Customer> listCustomers() {
    List<Customer> list = new ArrayList<>();
    try (Connection conn = getConnection();
         Statement s = conn.createStatement();
         ResultSet rs = s.executeQuery("select * from customers")) {
        while (rs.next()) {
            list.add(processRow(rs));
        }
        return list;
    } catch (SQLException e) {
        throw new RuntimeException(e.getMessage(), e); //or your exceptions
    }
}
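getConnection and processRow are left to the reader; a processRow for this example might simply map columns to a Customer (the column and property names here are assumed, not from the answer):

// Assumed row mapper for the snippet above; column names are illustrative.
private Customer processRow(ResultSet rs) throws SQLException {
    Customer c = new Customer();
    c.setId(rs.getLong("id"));
    c.setName(rs.getString("name"));
    return c;
}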
You can follow this pattern in a class called for example CustomersDao or CustomerManager, and you can call it with a simple
CustomersDao dao = new CustomersDao();
List<Customers> customers = dao.listCustomers();
Note that I'm using try-with-resources and this code is safe from connection leaks, clean, and straightforward. You probably don't want to follow the full DAO pattern with factories, interfaces and all that plumbing, which in many cases doesn't add real value.
I don't think it's a good idea to use ThreadLocals; used badly, as in the accepted answer, they are a source of classloader leaks.
Remember to ALWAYS close your resources (Statements, ResultSets, Connections) in a try-finally block or using try-with-resources.

Java: how to handle retries without copy-paste code?

I have multiple cases where I have to deal with retries for DB and networking operations. Everywhere I do it I have the following type of code:
for (int iteration = 1; ; iteration++) {
    try {
        data = doSomethingUseful(data);
        break;
    } catch (SomeException | AndAnotherException e) {
        if (iteration == helper.getNumberOfRetries()) {
            throw e;
        } else {
            errorReporter.reportError("Got following error for data = {}. Continue trying after delay...", data, e);
            utilities.defaultDelayForIteration(iteration);
            handleSpecificCase(data);
        }
    }
}
The issue is that this code pattern is copy-pasted all over my classes, which is really bad. I can't figure out how to get rid of this for-break-catch copy-paste pattern, since I usually get different exceptions to handle, and I want to log the data I failed on (usually also in different ways).
Is there a good way to avoid this copy-paste in Java 7?
Edit: I do use Guice for dependency injection. I do have checked exceptions. There could be multiple variables instead of just one data variable, and they are all of different types.
Edit 2: The AOP approach looks the most promising to me.
Off-hand, I can think of two different approaches:
If the differences in exception handling can be expressed declaratively, you might use AOP to weave the exception handling code around your methods. Then, your business code could look like:
@Retry(times = 3, loglevel = LogLevel.INFO)
List<User> getActiveUsers() throws DatabaseException {
    // talk to the database
}
The advantage is that it is really easy to add retry behaviour to a method; the disadvantage is the complexity of weaving the advice (which you only have to implement once; if you are using a dependency injection library, chances are it will offer method interception support).
The other approach is to use the command pattern:
abstract class Retrieable<I,O> {

    private final LogLevel logLevel;

    protected Retrieable(LogLevel loglevel) {
        this.logLevel = loglevel;
    }

    protected abstract O call(I input);

    // subclasses may override to perform custom logic.
    protected void handle(RuntimeException e) {
        // log the exception.
    }

    public O execute(I input) {
        for (int iteration = 1; ; iteration++) {
            try {
                return call(input);
            } catch (RuntimeException e) {
                if (iteration == helper.getNumberOfRetries()) {
                    throw e;
                } else {
                    handle(e);
                    utilities.defaultDelayForIteration(iteration);
                }
            }
        }
    }
}
The problem with the command pattern is the method arguments. You are restricted to a single parameter, and the generics are rather unwieldy for the caller. In addition, it won't work with checked exceptions. On the plus side, no fancy AOP stuff :-)
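To illustrate that point, a caller has to subclass the command and squeeze its arguments into the single input parameter, roughly like this (User, userDao and the LogLevel value are illustrative assumptions, not from the answer):

// Hypothetical usage of the command above; the generics make the call site rather verbose.
Retrieable<Long, User> loadUser = new Retrieable<Long, User>(LogLevel.INFO) {
    @Override
    protected User call(Long id) {
        return userDao.findById(id); // assumed DAO call; only unchecked exceptions are retried
    }
};
User user = loadUser.execute(42L);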
As already suggested, AOP and Java annotations are a good option. I would recommend using a ready-made mechanism from jcabi-aspects:
@RetryOnFailure(attempts = 2, delay = 10, verbose = false)
public String load(URL url) {
    return url.openConnection().getContent();
}
Read also this blog post: http://www.yegor256.com/2014/08/15/retry-java-method-on-exception.html
I have implemented the RetryLogic class below which provides reusable retry logic and supports parameters because the code to be retried is in a delegate passed in.
/**
 * Generic retry logic. Delegate must throw the specified exception type to trigger the retry logic.
 */
public class RetryLogic<T>
{
    public static interface Delegate<T>
    {
        T call() throws Exception;
    }

    private int maxAttempts;
    private int retryWaitSeconds;
    @SuppressWarnings("rawtypes")
    private Class retryExceptionType;

    public RetryLogic(int maxAttempts, int retryWaitSeconds, @SuppressWarnings("rawtypes") Class retryExceptionType)
    {
        this.maxAttempts = maxAttempts;
        this.retryWaitSeconds = retryWaitSeconds;
        this.retryExceptionType = retryExceptionType;
    }

    public T getResult(Delegate<T> caller) throws Exception {
        T result = null;
        int remainingAttempts = maxAttempts;
        do {
            try {
                result = caller.call();
            } catch (Exception e){
                if (e.getClass().equals(retryExceptionType))
                {
                    if (--remainingAttempts == 0)
                    {
                        throw new Exception("Retries exhausted.");
                    }
                    else
                    {
                        try {
                            Thread.sleep((1000 * retryWaitSeconds));
                        } catch (InterruptedException ie) {
                        }
                    }
                }
                else
                {
                    throw e;
                }
            }
        } while (result == null && remainingAttempts > 0);
        return result;
    }
}
Below is a use example. The code to be retried is within the call method.
private MyResultType getDataWithRetry(final String parameter) throws Exception {
    return new RetryLogic<MyResultType>(5, 15, Exception.class).getResult(new RetryLogic.Delegate<MyResultType>() {
        public MyResultType call() throws Exception {
            return dataLayer.getData(parameter);
        }
    });
}
In case you want to retry only when a specific type of exception occurs (and fail on all other types of exceptions) the RetryLogic class supports an exception class parameter.
Make your doSomething implement an interface, e.g. something like Runnable, and create a method containing your code above with doSomething replaced by interface.run(data), as in the sketch below.
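A small sketch of that idea (the names are illustrative; helper, errorReporter and utilities are the collaborators from the question's snippet):

// Illustrative interface-based retry helper; not from the original answer.
interface RetryableOperation<T> {
    T run(T data) throws Exception;
}

<T> T withRetries(T data, RetryableOperation<T> op) throws Exception {
    for (int iteration = 1; ; iteration++) {
        try {
            return op.run(data);
        } catch (Exception e) {
            if (iteration == helper.getNumberOfRetries()) {
                throw e;
            }
            errorReporter.reportError("Got following error for data = {}. Continue trying after delay...", data, e);
            utilities.defaultDelayForIteration(iteration);
        }
    }
}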
Take a look at this retry utility.
This method should work for most use cases:
public static <T> T executeWithRetry(final Callable<T> what, final int nrImmediateRetries,
        final int nrTotalRetries, final int retryWaitMillis, final int timeoutMillis)
You can easily implement an aspect using this utility to do this with even less code.
Extending the approach discussed already, how about something like this (no IDE on this netbook, so regard this as pseudocode...)
// generics left as an exercise for the reader...
public Object doWithRetry(Retryable r) {
    for (int iteration = 1; ; iteration++) {
        try {
            return r.doSomethingUseful(data);
        } catch (Exception e) {
            if (r.isRetryException(e)) {
                if (r.tooManyRetries(iteration)) {
                    throw e;
                }
            } else {
                r.handleOtherException(e);
            }
        }
    }
}
One thing I would like to add: most exceptions (99.999%) mean there is something very wrong with your code or environment that needs an admin's attention. If your code can't connect to the database, it's probably a misconfigured environment; there is little point in retrying just to find out it didn't work the 3rd, 4th, or 5th time either. If you're throwing an exception because the person didn't give a valid credit card number, retrying isn't going to magically fill in a credit card number.
The only situation that is remotely worth retrying is when a system is tremendously strained and things are timing out, but in that situation retry logic is probably going to cause more strain, not less (3x for 3 retries on every transaction). But this is what systems do to back down demand (see the Apollo lander mission story). When a system is asked to do more than it can, it starts dropping jobs, and timeouts are the signal that the system is strained (or poorly written). You'd be in a far better situation if you just increased the capacity of your system (add more RAM, bigger servers, more servers, better algorithms, scale it!).
The other situation would be if you're using optimistic locking and you can somehow recover and auto-merge two versions of an object. While I have seen this before, I'd caution against this approach, but it could be done for simple objects that can be merged without conflicts 100% of the time.
Most exception logic should be caught at the appropriate level (very important); make sure your system is in a good consistent state (i.e. roll back transactions, close files, etc.), log it, and inform the user it didn't work.
But I'll humor this idea and try to give a good framework (well, because it's fun, like crossword-puzzle fun).
// client code - what you write a lot
public class SomeDao {
    public SomeReturn saveObject( final SomeObject obj ) throws RetryException {
        Retry<SomeReturn> retry = new Retry<SomeReturn>() {
            public SomeReturn execute() throws Exception {
                try {
                    // doSomething
                    return someReturn;
                } catch( SomeExpectedBadExceptionNotWorthRetrying ex ) {
                    throw new NoRetryException( ex ); // optional exception block
                }
            }
        };
        return retry.run();
    }
}
// framework - what you write once
public abstract class Retry<T> {

    public static final int MAX_RETRIES = 3;
    private int tries = 0;

    public abstract T execute() throws Exception;

    public T run() throws RetryException {
        try {
            return execute();
        } catch( NoRetryException ex ) {
            throw ex;
        } catch( Exception ex ) {
            tries++;
            if( MAX_RETRIES == tries ) {
                throw new RetryException("Maximum retries exceeded", ex );
            } else {
                return run();
            }
        }
    }
}
