I guess a DAO is thread-safe if it does not use any class members.
So can it be used without any problem as a private field of a Servlet? We need only one copy, and
multiple threads can access it simultaneously, so why bother creating a local variable, right?
"DAO" is just a general term for database abstraction classes. Whether they are threadsafe or not depends on the specific implementation.
This bad example could be called a DAO, but it would get you into trouble if multiple threads call the insert method at the same time.
class MyDAO {
    // Shared mutable state: this instance field is what makes the class unsafe for concurrent use.
    private Connection connection = null;

    public boolean insertSomething(Something o) throws Exception {
        try {
            connection = getConnection();
            // do insert on connection.
            return true;
        } finally {
            if (connection != null) {
                connection.close();
            }
        }
    }
}
So the answer is: if your DAO handles connections and transactions right, it should work.
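For contrast, here is a minimal sketch of a thread-safe variant, reusing the hypothetical Something type and getConnection() helper from the example above: the connection is a local variable, so concurrent callers never share state, and try-with-resources closes it on every exit path.
class MyDAO {
    public boolean insertSomething(Something o) throws Exception {
        // Local variable instead of an instance field: each caller gets its own connection.
        try (Connection connection = getConnection()) {
            // do insert on connection.
            return true;
        } // connection is closed here, even if the insert throws
    }
}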
Related
So, here is some background info: I'm currently working at a company providing SaaS, and my work involves writing methods that use JDBC to retrieve and process data in a database. The problem is that most of the methods follow a certain pattern to manage the connection:
public Object someMethod(Object... parameters) throws MyCompanyException {
    Connection con = null;
    try {
        con = ConnectionPool.getConnection();
        con.setAutoCommit(false);
        // do something here
        con.commit();
        con.setAutoCommit(true);
    }
    catch (SomeException1 e) {
        con.rollback();
        throw new MyCompanyException(e);
    }
    catch (SomeException2 e) {
        con.rollback();
        throw new MyCompanyException(e);
    }
    // repeat until all exceptions are caught and handled
    finally {
        ConnectionPool.freeConnection(con);
    }
    // return something if the method is not void
}
It has already been adopted as a company standard to write all methods like this, so that the method rolls back all changes it has made if any exception is caught, and the connection is freed as soon as possible. However, from time to time some of us forget one of these routine steps, like releasing the connection or rolling back on error, and such a mistake is not easily detected until a customer complains about it. So I've decided to make these routine things happen automatically even if they are not written out in the method. For connection initiation and setup, this can be done easily in a constructor.
public abstract class SomeAbstractClass {
    protected Connection con;

    public SomeAbstractClass() {
        con = ConnectionPool.getConnection();
        con.setAutoCommit(false);
    }
}
But the real problem is releasing the connection automatically, immediately after the method finishes. I've considered using finalize(), but that's not what I'm looking for: finalize() is called by the GC, so it might not run when the method finishes, or even when the object is no longer referenced; in practice it may not be called until the JVM is running low on memory.
Is there any way to free my connection automatically and immediately when the method finishes its job?
Use "try with resources". It is a language feature: you write a normal-looking try-catch block, and whatever happens inside it, the resources declared in its header are closed when you leave it.
try (Connection con = ConnectionPool.getConnection()) {
    con.doStuff(...);
}
// here the Connection con is already closed.
It works because Connection extends AutoCloseable: for every resource declared in the "resource acquisition" part of the try statement that implements AutoCloseable, its close() method is called before control leaves the try / catch block.
This removes the need for a finally { ... } block in many scenarios, and is actually safer than most hand-written finally { ... } blocks, because it also handles exceptions thrown in the catch { ... } and finally { ... } blocks while still closing the resource.
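Applied to the pattern from the question, a sketch could look like the following. It reuses the ConnectionPool and MyCompanyException names from the question and assumes that closing a pooled connection returns it to the pool; if your pool requires an explicit ConnectionPool.freeConnection(con), keep that call in a finally block instead.
public Object someMethod(Object... parameters) throws MyCompanyException {
    // try-with-resources releases the connection no matter how the block is left
    try (Connection con = ConnectionPool.getConnection()) {
        con.setAutoCommit(false);
        try {
            // do something here
            con.commit();
            return null; // return something if the method is not void
        } catch (Exception e) {
            con.rollback(); // one rollback for any failure
            throw new MyCompanyException(e);
        }
    } catch (SQLException e) { // thrown by close(), setAutoCommit(), rollback(), ...
        throw new MyCompanyException(e);
    }
}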
One of the standard ways to do this is using AOP. You can look at how the Spring Framework handles JDBC transactions and connections and manages them using a MethodInterceptor. My advice is to use Spring in your project and not reinvent the wheel.
The idea behind the MethodInterceptor is that it creates and opens a connection before the JDBC-related method is called, puts the connection into a ThreadLocal so that your method can get it to make SQL calls, and then closes it after the method has executed.
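A rough sketch of that idea (not Spring's actual transaction interceptor, just the shape of it, reusing the ConnectionPool from the question; the intercepted DAO methods would fetch the connection via the hypothetical currentConnection() accessor instead of opening their own):
import java.sql.Connection;

import org.aopalliance.intercept.MethodInterceptor;
import org.aopalliance.intercept.MethodInvocation;

public class ConnectionInterceptor implements MethodInterceptor {

    private static final ThreadLocal<Connection> CURRENT = new ThreadLocal<>();

    // Hypothetical accessor the intercepted methods call instead of ConnectionPool.getConnection().
    public static Connection currentConnection() {
        return CURRENT.get();
    }

    @Override
    public Object invoke(MethodInvocation invocation) throws Throwable {
        Connection con = ConnectionPool.getConnection(); // pool from the question
        CURRENT.set(con);
        try {
            con.setAutoCommit(false);
            Object result = invocation.proceed(); // run the actual JDBC method
            con.commit();
            return result;
        } catch (Exception e) {
            con.rollback();
            throw e;
        } finally {
            CURRENT.remove();
            ConnectionPool.freeConnection(con); // pool from the question
        }
    }
}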
You could add a method to your ConnectionPool class for example:
public static <T> T execute(Function<Connection, T> query,
                            T defaultValue,
                            Object... parameters) throws MyCompanyException {
    Connection con = null;
    try {
        con = ConnectionPool.getConnection();
        con.setAutoCommit(false);
        T result = query.apply(con);
        con.commit();
        con.setAutoCommit(true);
        return result;
    } catch (SomeException1 e) {
        con.rollback();
        throw new MyCompanyException(e);
    }
    //etc. - repeat for the other exceptions, as in the original pattern
    finally {
        ConnectionPool.freeConnection(con);
    }
    // reached only if a handled exception above did not rethrow
    return defaultValue;
}
And you call it from the rest of your code with:
public Object someMethod(Object... parameters) throws MyCompanyException {
    return ConnectionPool.execute(
            con -> { ... },  // use the connection and return something
            null,            // default value
            parameters
    );
}
I am currently implementing a REST API web service using the Dropwizard framework together with dropwizard-hibernate, i.e. JPA/Hibernate (using a PostgreSQL database).
I have a method inside a resource which I annotated with @UnitOfWork to get one transaction for the whole request.
The resource method calls a method of one of my DAOs, which extends AbstractDAO<MyEntity> and handles retrieval and modification of my entities (of type MyEntity) in the database.
This DAO method does the following: First it selects an entity instance and therefore a row from the database. Afterwards, the entity instance is inspected and based on its properties, some of its properties can be altered. In this case, the row in the database should be updated.
I didn't specify anything else regarding caching, locking or transactions anywhere, so I assume the default is some kind of optimistic locking mechanism enforced by Hibernate.
Therefore (I think), when another thread deletes the entity instance after the current one has selected it from the database, a StaleStateException is thrown when trying to commit the transaction, because the row that should be updated has already been deleted by the other thread.
When using the @UnitOfWork annotation, my understanding is that I'm not able to catch this exception, either in the DAO method or in the resource method.
I could now implement an ExceptionMapper<StaleStateException> for Jersey to deliver an HTTP 503 response with a Retry-After header or something like that to the client, telling it to retry its request.
But I'd rather first retry the request/transaction (which is basically the same thing here because of the @UnitOfWork annotation) while still on the server.
Is there any example implementation of a server-side transaction retry mechanism for Dropwizard? Something like retrying a configurable number of times (e.g. 3) and then failing with an exception/HTTP 503 response.
How would you implement this? The first thing that came to my mind is another annotation like @Retry(exception = StaleStateException.class, count = 3) which I could add to my resource.
Any suggestions on this?
Or is there an alternative solution to my problem considering different locking/transaction-related things?
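(For reference, the ExceptionMapper fallback mentioned above could look roughly like the following sketch, using the standard JAX-RS API; the Retry-After value of one second is just a placeholder.)
import javax.ws.rs.core.HttpHeaders;
import javax.ws.rs.core.Response;
import javax.ws.rs.ext.ExceptionMapper;
import javax.ws.rs.ext.Provider;

import org.hibernate.StaleStateException;

@Provider
public class StaleStateExceptionMapper implements ExceptionMapper<StaleStateException> {

    @Override
    public Response toResponse(StaleStateException e) {
        // Tell the client to retry later; the delay value is arbitrary.
        return Response.status(Response.Status.SERVICE_UNAVAILABLE)
                .header(HttpHeaders.RETRY_AFTER, "1")
                .build();
    }
}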
An alternative approach is to use an injection framework - in my case Guice - and use method interceptors for this. This is a more generic solution.
DW integrates with Guice very smoothly through https://github.com/xvik/dropwizard-guicey
I have a generic implementation that can retry any exception. Like yours, it works with an annotation, as follows:
@Target({ElementType.TYPE, ElementType.METHOD})
@Retention(RetentionPolicy.RUNTIME)
public @interface Retry {
}
The interceptor then does (with docs):
/**
 * Abstract interceptor to catch exceptions and retry the method automatically.
 * Things to note:
 *
 * 1. The method must be idempotent (you can invoke it x times without altering the result)
 * 2. The method MUST re-open a connection to the DB if that is what is retried. Connections are in an undefined state after a rollback/deadlock.
 *    You can try and reuse them, however the result will likely not be what you expected
 * 3. Implement the retry logic intelligently. You may need to unpack the exception to get to the original.
 *
 * @author artur
 *
 */
public abstract class RetryInterceptor implements MethodInterceptor {

    private static final Logger log = Logger.getLogger(RetryInterceptor.class);

    @Override
    public Object invoke(MethodInvocation invocation) throws Throwable {
        if (invocation.getMethod().isAnnotationPresent(Retry.class)) {
            int retryCount = 0;
            boolean retry = true;
            while (retry && retryCount < maxRetries()) {
                try {
                    return invocation.proceed();
                } catch (Exception e) {
                    log.warn("Exception occurred while trying to execute the method", e);
                    if (!retry(e)) {
                        retry = false;
                    } else {
                        retryCount++;
                    }
                }
            }
        }
        throw new IllegalStateException("All retries of the invocation failed");
    }

    protected boolean retry(Exception e) {
        return false;
    }

    protected int maxRetries() {
        return 0;
    }
}
A few things to note about this approach.
The retried method must be designed so that it can be invoked multiple times without altering the result (e.g. if the method stores temporary results in the form of increments, executing it twice might increment twice)
Database exceptions are generally not safe to retry on the same connection. The method must open a new connection (in particular when retrying deadlocks, which is my case)
Other than that, this base implementation simply catches everything and delegates the retry count and detection to the implementing class. For example, my specific deadlock retry interceptor:
public class DeadlockRetryInterceptor extends RetryInterceptor {

    private static final Logger log = Logger.getLogger(DeadlockRetryInterceptor.class);

    @Override
    protected int maxRetries() {
        return 6;
    }

    @Override
    protected boolean retry(Exception e) {
        SQLException ex = unpack(e);
        if (ex == null) {
            return false;
        }
        int errorCode = ex.getErrorCode();
        log.info("Found exception: " + ex.getClass().getSimpleName() + " With error code: " + errorCode, ex);
        return errorCode == 1205;
    }

    private SQLException unpack(final Throwable t) {
        if (t == null) {
            return null;
        }
        if (t instanceof SQLException) {
            return (SQLException) t;
        }
        return unpack(t.getCause());
    }
}
And finally, I can bind this to Guice by doing:
bindInterceptor(Matchers.any(), Matchers.annotatedWith(Retry.class), new DeadlockRetryInterceptor());
This matches any class, and any method annotated with @Retry.
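For completeness, a minimal sketch of where that binding could live, assuming a plain Guice AbstractModule (wiring it into dropwizard-guicey is left out):
import com.google.inject.AbstractModule;
import com.google.inject.matcher.Matchers;

public class RetryModule extends AbstractModule {

    @Override
    protected void configure() {
        // Intercept every @Retry-annotated method on Guice-constructed objects.
        bindInterceptor(Matchers.any(), Matchers.annotatedWith(Retry.class),
                new DeadlockRetryInterceptor());
    }
}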
An example method for retry would be:
@Override
@Retry
public List<MyObject> getSomething(int count, String property) {
    try (Connection con = datasource.getConnection();
         Context c = metrics.timer(TIMER_NAME).time()) {
        // do some work
        // return some stuff
    } catch (SQLException e) {
        // catch the exception and rethrow it
        throw new RuntimeException("Some more specific thing", e);
    }
}
The reason I need the unpack is that old legacy code, like this DAO impl, already catches its own exceptions.
Note also how the method (a get) acquires a new connection from my datasource pool each time it is invoked, and how no modifications are done inside it (hence: safe to retry).
I hope that helps.
You can do similar things by implementing ApplicationListeners or RequestFilters or similar, however I think this is a more generic approach that can retry any kind of failure on any method that is Guice-bound.
Also note that Guice can only intercept methods on classes that it constructs itself (injected/annotated constructors etc.).
Hope that helps,
Artur
I found a pull request in the Dropwizard repository that helped me. It basically enables the possibility of using the #UnitOfWork annotation on other than resource methods.
Using this, I was able to detach the session opening/closing and transaction creation/committing lifecycle from the resource method by moving the @UnitOfWork annotation from the resource method to the DAO method that is responsible for the data manipulation causing the StaleStateException.
Then I was able to build a retry mechanism around this DAO method.
Example:
// class MyEntityDAO extends AbstractDAO<MyEntity>
@UnitOfWork
void tryManipulateData() {
    // Due to optimistic locking, this operation causes a StaleStateException when
    // committed "by the @UnitOfWork annotation" after returning from this method.
}

// Retry mechanism, implemented wheresoever.
void manipulateData() {
    while (true) {
        try {
            tryManipulateData();
        } catch (StaleStateException e) {
            continue; // Retry.
        }
        return;
    }
}
// class MyEntityResource
@POST
// ...
// @UnitOfWork can also be used here if nested transactions are desired.
public Response someResourceMethod() {
    // Call manipulateData() somehow.
}
Of course one could also attach the @UnitOfWork annotation to a method inside a service class which makes use of the DAOs, instead of applying it directly to a DAO method. In whichever class the annotation is used, remember to create a proxy of the instances with the UnitOfWorkAwareProxyFactory as described in the pull request.
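For illustration, creating such a proxy could look roughly like this; the hibernateBundle name and constructor arguments are assumptions for a typical dropwizard-hibernate setup, so check the pull request for the exact API:
// Sketch: build a @UnitOfWork-aware proxy of the DAO instead of instantiating it directly.
UnitOfWorkAwareProxyFactory proxyFactory = new UnitOfWorkAwareProxyFactory(hibernateBundle);
MyEntityDAO dao = proxyFactory.create(MyEntityDAO.class,
        SessionFactory.class, hibernateBundle.getSessionFactory());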
I am trying to find out whether it is possible to create a Java dynamic proxy that automatically closes AutoCloseable resources, without having to remember to wrap such resources in a try-with-resources block.
For example I have a JedisPool that has a getResource method which can be used like that:
try (Jedis jedis = jedisPool.getResource()) {
    // use jedis client
}
For now I did something like that:
class JedisProxy implements InvocationHandler {

    private final JedisPool pool;

    public JedisProxy(JedisPool pool) {
        this.pool = pool;
    }

    public static JedisCommands newInstance(Pool<Jedis> pool) {
        return (JedisCommands) java.lang.reflect.Proxy.newProxyInstance(
                JedisCommands.class.getClassLoader(),
                new Class[] { JedisCommands.class },
                new JedisProxy(pool));
    }

    @Override
    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        try (Jedis client = pool.getResource()) {
            return method.invoke(client, args);
        } catch (InvocationTargetException e) {
            throw e.getTargetException();
        } catch (Exception e) {
            throw e;
        }
    }
}
Now each time I call a method on Jedis (JedisCommands), the call is passed to the proxy, which gets a new client from the pool, executes the method and returns the resource to the pool.
It works fine, but when I want to execute multiple methods on the client, a resource is taken from the pool and returned again for each method (which might be time consuming). Do you have any idea how to improve that?
You would end up with your own "transaction manager" in which you normally would return the object to the pool immediately, but if you had started a "transaction" the object wouldn't be returned to the pool until you've "committed" the "transaction".
Suddenly your problem with using try-with-resources turns into an actual problem due to the use of a hand-crafted custom mechanism.
Using try with resources pros:
Language built-in feature
Allows you to attach a catch block, and the resources are still released
Simple, consistent syntax, so that even if a developer weren't familiar with it, he would see all the Jedis code surrounded by it and (hopefully) think "So this must be the correct way to use this"
Cons:
You need to remember to use it
Your suggestion pros (You can tell me if I forget anything):
Automatic closing even if the developer doesn't close the resource, preventing a resource leak
Cons:
Extra code always means extra places to find bugs in
If you don't create a "transaction" mechanism, you may suffer from a performance hit (I'm not familiar with [jr]edis or your project, so I can't say whether it's really an issue or not)
If you do create it, you'll have even more extra code which is prone to bugs
Syntax is no longer simple, and will be confusing to anyone coming to the project
Exception handling becomes more complicated
You'll be making all your proxy-calls through reflection (a minor issue, but hey, it's my list ;)
Possibly more, depending on what the final implementation will be
If you think I'm not making valid points, please tell me. Otherwise my assertion will remain "you have a 'solution' looking for a problem".
I don't think that this is going in the right direction. After all, developers should get used to handling resources correctly, and IDEs/compilers are able to issue warnings when AutoCloseable resources aren't handled using try(…){}…
However, the task of creating a proxy that decorates all invocations, plus a way to decorate a batch of multiple actions as a whole, is of a general nature and therefore has a general solution:
class JedisProxy implements InvocationHandler {

    private final JedisPool pool;

    public JedisProxy(JedisPool pool) {
        this.pool = pool;
    }

    public static JedisCommands newInstance(Pool<Jedis> pool) {
        return (JedisCommands) java.lang.reflect.Proxy.newProxyInstance(
                JedisCommands.class.getClassLoader(),
                new Class[] { JedisCommands.class },
                new JedisProxy(pool));
    }

    @Override
    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        try (Jedis client = pool.getResource()) {
            return method.invoke(client, args);
        } catch (InvocationTargetException e) {
            throw e.getTargetException();
        }
    }

    public static void executeBatch(JedisCommands c, Consumer<JedisCommands> action) {
        InvocationHandler ih = Proxy.getInvocationHandler(c);
        if (!(ih instanceof JedisProxy))
            throw new IllegalArgumentException();
        try (Jedis actual = ((JedisProxy) ih).pool.getResource()) {
            action.accept(actual);
        }
    }

    public static <R> R executeBatch(JedisCommands c, Function<JedisCommands, R> action) {
        InvocationHandler ih = Proxy.getInvocationHandler(c);
        if (!(ih instanceof JedisProxy))
            throw new IllegalArgumentException();
        try (Jedis actual = ((JedisProxy) ih).pool.getResource()) {
            return action.apply(actual);
        }
    }
}
Note that the type conversion of a Pool<Jedis> to a JedisPool looked suspicious to me but I didn’t change anything in that code as I don’t have these classes to verify it.
Now you can use it like
JedisCommands c = JedisProxy.newInstance(pool);
c.someAction(); // acquire - someAction - close

JedisProxy.executeBatch(c, jedi -> {
    jedi.someAction();
    jedi.anotherAction();
}); // acquire - someAction - anotherAction - close

ResultType foo = JedisProxy.executeBatch(c, jedi -> {
    jedi.someAction();
    return jedi.someActionReturningValue(…);
}); // acquire - someAction - someActionReturningValue - close - return the value
The batch execution requires the instance to be a proxy, otherwise an exception is thrown as it’s clear that this method cannot guarantee a particular behavior for an unknown instance with an unknown life cycle.
Also, developers now have to be aware of the proxy and the batch execution feature, just like they have to be aware of resources and the try(…){} statement when not using a proxy. On the other hand, if they aren't, they lose performance when invoking multiple methods on a proxy without using the batch method, whereas they leak resources when invoking multiple methods without try(…){} on an actual, non-proxy resource…
I have a variable, protected static Context jndi;, in my class, where Context is an interface. When I try to access it in the method below, it generates the Sonar violation mentioned in the title.
public JMSQueueResource createQueueResource(String queueBindingName, String qcfBindingName, boolean messagePersisted, boolean autoAcknowledge, boolean nonJMS) throws JMSException, NamingException {
    JMSQueueResource qResource = new JMSQueueResource();
    try {
        jndi = createInitialContext();
        if (queueConnectionFactory == null) {
            queueConnectionFactory = (QueueConnectionFactory) lookup(jndi, qcfBindingName);
        }
        qResource.theQueueConnection = queueConnectionFactory.createQueueConnection();
        if (autoAcknowledge) {
            qResource.theQueueSession = qResource.theQueueConnection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
        }
        else {
            qResource.theQueueSession = qResource.theQueueConnection.createQueueSession(false, Session.CLIENT_ACKNOWLEDGE);
        }
        Queue queue = (Queue) lookup(jndi, queueBindingName);
        //if (nonJMS && queue instanceof com.ibm.mq.jms.MQQueue) {
        //    com.ibm.mq.jms.MQQueue q = (com.ibm.mq.jms.MQQueue) queue;
        //    q.setTargetClient(JMSC.MQJMS_CLIENT_NONJMS_MQ);
        //}
        qResource.theQueueSender = qResource.theQueueSession.createSender(queue);
        if (messagePersisted) {
            qResource.theQueueSender.setDeliveryMode(DeliveryMode.PERSISTENT);
        }
        else {
            qResource.theQueueSender.setDeliveryMode(DeliveryMode.NON_PERSISTENT);
        }
        qResource.theQueueConnection.start();
    }
    catch (JMSException jmse) {
        throw jmse;
    }
    catch (NamingException ne) {
        throw ne;
    }
    finally {
        if (jndi != null) {
            jndi.close();
        }
    }
    return qResource;
}
I have seen suggestions such as using an AtomicInteger wrapper. What is the best fix for this problem?
The Sonar violation is a valid one, as mutating a static variable from an instance method can lead to some pretty messed-up behavior, for example:
How can you ensure that the field is initialized by an instance method before a static read access?
What happens when multiple threads access the field, directly or through the createQueueResource method?
According to the Java documentation, making it static and potentially accessing it from multiple threads is a bad idea:
An InitialContext instance is not synchronized against concurrent access by multiple threads. Multiple threads each manipulating a different InitialContext instance need not synchronize. Threads that need to access a single InitialContext instance concurrently should synchronize amongst themselves and provide the necessary locking.
Having a local variable as suggested seems like a reasonable first way to avoid the warning and the related problems.
Whether the construction of the context is expensive depends also on the factory that is used to provide it.
First you need to worry about the correctness of the program, then you can optimize when you can test where the real bottlenecks are.
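A minimal sketch of that local-variable fix, trimmed down from the method in the question:
public JMSQueueResource createQueueResource(String queueBindingName, String qcfBindingName, boolean messagePersisted, boolean autoAcknowledge, boolean nonJMS) throws JMSException, NamingException {
    JMSQueueResource qResource = new JMSQueueResource();
    Context jndi = null; // local variable instead of the static field
    try {
        jndi = createInitialContext();
        // ... perform the lookups and set up the connection, session and sender as before ...
    } finally {
        if (jndi != null) {
            jndi.close();
        }
    }
    return qResource;
}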
EDIT:
This link should provide more insight into the Spring application context and how to leverage the dependency injection of the Spring container to make use of the Context, instead of storing it in a variable in a class: https://spring.io/understanding/application-context
I work on a multithreaded Java application: a web server that provides REST services, about 1000 requests per second. I have a relational database, and I use Hibernate to access it. The database gets about 300-400 requests per second. I am wondering whether the DAO pattern, as implemented here, is correct from a multithreading perspective.
So, there is one BaseModelDAO class that looks like this:
public class BaseModelDAO {

    protected Session session;

    protected final void commit() {
        session.getTransaction().commit();
    }

    protected final void openSession() {
        session = HibernateUtil.getSessionFactory().openSession();
        session.beginTransaction();
    }
}
Then I have a DAO class for every table from database:
public class ClientDAOHibernate extends BaseModelDAO implements ClientDAO {

    private Logger log = Logger.getLogger(this.getClass());

    @Override
    public synchronized void addClient(Client client) throws Exception {
        try {
            openSession();
            session.save(client);
            commit();
            log.debug("client successfully added into database");
        } catch (Exception e) {
            log.error("error adding new client into database");
            throw new Exception("couldn't add client into database");
        } finally {
            session.close();
        }
    }

    @Override
    public synchronized Client getClient(String username, String password) throws Exception {
        Client client = null;
        try {
            openSession();
            client = (Client) session.createCriteria(Client.class)
                    .createAlias("user", "UserAlias")
                    .add(Restrictions.eq("UserAlias.username", username))
                    .add(Restrictions.eq("UserAlias.password", password))
                    .uniqueResult();
            commit();
        } catch (Exception e) {
            log.error("error updating user into database");
            throw new DBUsersGetUserException();
        } finally {
            session.close();
        }
        return client;
    }
}
Here are my questions:
Is it OK to open and close the session for every database access, considering the number of concurrent requests?
The DAO classes are currently accessed directly from the application business logic. Should a DAO manager be used instead? If yes, what would be a good design for it?
No, your implementation is not a good one:
transactions should be around business logic, not around data access logic: if you want to transfer money from one account to another, you can't have a transaction for the debit operation, and another transaction for the credit operation. The transaction must cover the whole use-case.
by synchronizing every method of the DAO, you prevent two requests from getting a client at the same time. You should not have a session field in your DAO. The session should be a local variable of each method. By doing this, your DAO becomes stateless, and thus inherently thread-safe, without any need for synchronization (see the sketch below).
As Michael says in his comment, using programmatic transactions makes the code verbose, complex, and not focused on the business use-case. Use EJBs or Spring to get declarative transaction management and exception handling.
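To make the second point concrete, here is a minimal sketch of a stateless DAO method, reusing the HibernateUtil helper from the question: the session and transaction are local to the method, so no synchronized keyword is needed (in a real application the transaction boundary would move up into the business layer, per the first point).
public class ClientDAOHibernate implements ClientDAO {

    private final Logger log = Logger.getLogger(this.getClass());

    @Override
    public void addClient(Client client) throws Exception {
        // Session is a local variable: no shared state, so no synchronization is needed.
        Session session = HibernateUtil.getSessionFactory().openSession();
        try {
            session.beginTransaction();
            session.save(client);
            session.getTransaction().commit();
            log.debug("client successfully added into database");
        } catch (Exception e) {
            session.getTransaction().rollback();
            log.error("error adding new client into database");
            throw new Exception("couldn't add client into database", e);
        } finally {
            session.close();
        }
    }
}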