I work on a multithreaded Java application: a web server that provides REST services and handles about 1000 requests per second. I have a relational database, which I access using Hibernate; the database receives about 300-400 requests per second. I am wondering whether the DAO pattern, as I use it, is correct from a multithreading perspective.
So, there is one BaseModelDAO class that looks like this:
public class BaseModelDAO {

    protected Session session;

    protected final void commit() {
        session.getTransaction().commit();
    }

    protected final void openSession() {
        session = HibernateUtil.getSessionFactory().openSession();
        session.beginTransaction();
    }
}
Then I have a DAO class for every table from database:
public class ClientDAOHibernate extends BaseModelDAO implements ClientDAO {

    private Logger log = Logger.getLogger(this.getClass());

    @Override
    public synchronized void addClient(Client client) throws Exception {
        try {
            openSession();
            session.save(client);
            commit();
            log.debug("client successfully added into database");
        } catch (Exception e) {
            log.error("error adding new client into database", e);
            throw new Exception("couldn't add client into database", e);
        } finally {
            session.close();
        }
    }

    @Override
    public synchronized Client getClient(String username, String password) throws Exception {
        Client client = null;
        try {
            openSession();
            client = (Client) session.createCriteria(Client.class)
                    .createAlias("user", "UserAlias")
                    .add(Restrictions.eq("UserAlias.username", username))
                    .add(Restrictions.eq("UserAlias.password", password))
                    .uniqueResult();
            commit();
        } catch (Exception e) {
            log.error("error getting client from database", e);
            throw new DBUsersGetUserException();
        } finally {
            session.close();
        }
        return client;
    }
}
Here are my questions:
Is it OK to open and close the session for every access to the DB, taking into consideration the number of concurrent requests?
Right now the DAO classes are accessed directly from the application business logic. Should a DAO manager be used instead? If yes, what would be a good design to implement it?
No, your implementation is not a good one:
Transactions should be around business logic, not around data access logic: if you want to transfer money from one account to another, you can't have a transaction for the debit operation and another transaction for the credit operation. The transaction must cover the whole use case.
By synchronizing every method of the DAO, you forbid two requests to get a client at the same time. You should not have a session field in your DAO. The session should be a local variable of each method. By doing this, your DAO becomes stateless, and thus inherently thread-safe, without any need for synchronization.
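For illustration, here is a minimal sketch of what the stateless version of the DAO from the question could look like, reusing the HibernateUtil helper from the question and keeping the Session as a local variable (no synchronized needed):

public class ClientDAOHibernate implements ClientDAO {

    private final Logger log = Logger.getLogger(this.getClass());

    @Override
    public void addClient(Client client) throws Exception {
        // The Session lives only inside this method, so concurrent calls share no state.
        Session session = HibernateUtil.getSessionFactory().openSession();
        try {
            session.beginTransaction();
            session.save(client);
            session.getTransaction().commit();
        } catch (Exception e) {
            if (session.getTransaction().isActive()) {
                session.getTransaction().rollback();
            }
            log.error("error adding new client into database", e);
            throw new Exception("couldn't add client into database", e);
        } finally {
            session.close();
        }
    }
}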
As Michael says in his comment, using programmatic transactions makes the code verbose, complex, and not focused on the business use case. Use EJBs or Spring to enjoy declarative transaction management and exception handling.
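As a rough sketch of what the declarative style buys you (the service, DAO and method names below are hypothetical, and the exact annotations depend on the framework you pick), a Spring service could look like this:

import java.math.BigDecimal;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class TransferService {

    private final AccountDao accountDao; // hypothetical DAO, injected by Spring

    public TransferService(AccountDao accountDao) {
        this.accountDao = accountDao;
    }

    // One transaction spans the whole use case: both operations commit or roll back together.
    @Transactional
    public void transfer(long fromAccountId, long toAccountId, BigDecimal amount) {
        accountDao.debit(fromAccountId, amount);
        accountDao.credit(toAccountId, amount);
    }
}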
Related
I am currently implementing a REST API web service using the Dropwizard framework together with dropwizard-hibernate and JPA/Hibernate (using a PostgreSQL database).
I have a method inside a resource which I annotated with @UnitOfWork to get one transaction for the whole request.
The resource method calls a method of one of my DAOs which extends AbstractDAO<MyEntity> and is used to communicate retrieval or modification of my entities (of type MyEntity) with the database.
This DAO method does the following: First it selects an entity instance and therefore a row from the database. Afterwards, the entity instance is inspected and based on its properties, some of its properties can be altered. In this case, the row in the database should be updated.
I didn't specify anything else regarding caching, locking or transactions anywhere, so I assume the default is some kind of optimistic locking mechanism enforced by Hibernate.
Therefore (I think), when another thread deletes the entity instance after the current one has selected it from the database, a StaleStateException is thrown when trying to commit the transaction, because the entity instance which should be updated has already been deleted by the other thread.
When using the @UnitOfWork annotation, my understanding is that I'm not able to catch this exception, either in the DAO method or in the resource method.
I could now implement an ExceptionMapper<StaleStateException> for Jersey to deliver an HTTP 503 response with a Retry-After header or something like that to the client, to tell it to retry its request.
But I'd first rather like to retry the request/transaction (which is basically the same here because of the @UnitOfWork annotation) while still on the server.
Is there any example implementation of a server-side transaction retry mechanism when using Dropwizard? Like retrying a configurable number of times (e.g. 3) and then failing with an exception/HTTP 503 response.
How would you implement this? The first thing that came to my mind is another annotation like @Retry(exception = StaleStateException.class, count = 3) which I could add to my resource method.
Any suggestions on this?
Or is there an alternative solution to my problem considering different locking/transaction-related things?
An alternative approach is to use an injection framework - in my case Guice - and use method interceptors. This is a more generic solution.
Dropwizard integrates with Guice very smoothly through https://github.com/xvik/dropwizard-guicey
I have a generic implementation that can retry any exception. It works, like yours, with an annotation, as follows:
@Target({ElementType.TYPE, ElementType.METHOD})
@Retention(RetentionPolicy.RUNTIME)
public @interface Retry {
}
The interceptor then does the following (with docs):
/**
 * Abstract interceptor to catch exceptions and retry the method automatically.
 * Things to note:
 *
 * 1. The method must be idempotent (you can invoke it x times without altering the result)
 * 2. The method MUST re-open a connection to the DB if that is what is retried. Connections are in an undefined state after a rollback/deadlock.
 *    You can try to reuse them, however the result will likely not be what you expected.
 * 3. Implement the retry logic intelligently. You may need to unpack the exception to get to the original one.
 *
 * @author artur
 */
public abstract class RetryInterceptor implements MethodInterceptor {

    private static final Logger log = Logger.getLogger(RetryInterceptor.class);

    @Override
    public Object invoke(MethodInvocation invocation) throws Throwable {
        if (invocation.getMethod().isAnnotationPresent(Retry.class)) {
            int retryCount = 0;
            boolean retry = true;
            while (retry && retryCount < maxRetries()) {
                try {
                    return invocation.proceed();
                } catch (Exception e) {
                    log.warn("Exception occurred while trying to execute the method", e);
                    if (!retry(e)) {
                        retry = false;
                    } else {
                        retryCount++;
                    }
                }
            }
            throw new IllegalStateException("All retries of the invocation failed");
        }
        return invocation.proceed(); // not annotated with @Retry: just pass the call through
    }

    protected boolean retry(Exception e) {
        return false;
    }

    protected int maxRetries() {
        return 0;
    }
}
A few things to note about this approach.
The retried method must be designed so that it can be invoked multiple times without altering the result (e.g. if the method stores temporary results in the form of increments, then executing it twice might increment twice).
After a database exception, the existing connection is generally not safe for a retry; the method must open a new connection (in particular when retrying deadlocks, which is my case).
Other than that, this base implementation simply catches everything and delegates the retry count and retry detection to the implementing class. For example, my specific deadlock retry interceptor:
public class DeadlockRetryInterceptor extends RetryInterceptor {

    private static final Logger log = Logger.getLogger(DeadlockRetryInterceptor.class);

    @Override
    protected int maxRetries() {
        return 6;
    }

    @Override
    protected boolean retry(Exception e) {
        SQLException ex = unpack(e);
        if (ex == null) {
            return false;
        }
        int errorCode = ex.getErrorCode();
        log.info("Found exception: " + ex.getClass().getSimpleName() + " with error code: " + errorCode, ex);
        return errorCode == 1205;
    }

    private SQLException unpack(final Throwable t) {
        if (t == null) {
            return null;
        }
        if (t instanceof SQLException) {
            return (SQLException) t;
        }
        return unpack(t.getCause());
    }
}
And finally, I can bind this in Guice by doing:
bindInterceptor(Matchers.any(), Matchers.annotatedWith(Retry.class), new DeadlockRetryInterceptor());
This binds the interceptor to any class and any method annotated with @Retry.
An example method for retry would be:
@Override
@Retry
public List<MyObject> getSomething(int count, String property) {
    try (Connection con = datasource.getConnection();
         Context c = metrics.timer(TIMER_NAME).time()) {
        // do some work
        // return some stuff
    } catch (SQLException e) {
        // catches the exception and throws it out
        throw new RuntimeException("Some more specific thing", e);
    }
}
The reason I need unpack() is that old legacy code, like this DAO impl, already catches and wraps its own exceptions.
Note also how the method (a get) obtains a fresh connection from my datasource pool each time it is invoked, and how no modifications are done inside it (hence: safe to retry).
You can do similar things by implementing ApplicationListeners or RequestFilters or similar, however I think this is a more generic approach that can retry any kind of failure on any method that is bound by Guice.
Also note that Guice can only intercept methods on instances it constructs itself (@Inject-annotated constructor, etc.).
Hope that helps,
Artur
I found a pull request in the Dropwizard repository that helped me. It basically enables using the @UnitOfWork annotation on methods other than resource methods.
Using this, I was able to detach the session opening/closing and transaction creation/committing lifecycle from the resource method by moving the @UnitOfWork annotation from the resource method to the DAO method which is responsible for the data manipulation that causes the StaleStateException.
Then I was able to build a retry mechanism around this DAO method.
Example:
// class MyEntityDAO extends AbstractDAO<MyEntity>
@UnitOfWork
void tryManipulateData() {
    // Due to optimistic locking, these operations cause a StaleStateException when
    // committed "by the @UnitOfWork annotation" after returning from this method.
}

// Retry mechanism, implemented wheresoever.
void manipulateData() {
    while (true) {
        try {
            tryManipulateData();
        } catch (StaleStateException e) {
            continue; // Retry.
        }
        return;
    }
}

// class MyEntityResource
@POST
// ...
// @UnitOfWork can also be used here if nested transactions are desired.
public Response someResourceMethod() {
    // Call manipulateData() somehow.
}
Of course one could also attach the @UnitOfWork annotation to a method inside a service class which makes use of the DAOs, instead of applying it directly to a DAO method. In whatever class the annotation is used, remember to create a proxy of the instances with the UnitOfWorkAwareProxyFactory, as described in the pull request.
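For reference, the wiring described in the pull request looks roughly like the sketch below; MyEntityDAO and the hibernateBundle variable are assumptions taken from the surrounding discussion, so double-check the exact factory constructor against the dropwizard-hibernate version you use.

// Sketch: create a @UnitOfWork-aware proxy of the DAO so the annotation is honoured
// outside of resource methods.
UnitOfWorkAwareProxyFactory proxyFactory = new UnitOfWorkAwareProxyFactory(hibernateBundle);

MyEntityDAO dao = proxyFactory.create(
        MyEntityDAO.class,                     // class to proxy
        SessionFactory.class,                  // constructor parameter type of the DAO
        hibernateBundle.getSessionFactory());  // constructor argument passed to the DAO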
My problem is as follows. I need a class that works as a single point of access to the database connection in a web system, so as to avoid having one user with two open connections. It should be as optimal as possible, and it should manage every transaction in the system. In other words, only that class should be able to instantiate DAOs. And to make it better, it should also use connection pooling! What should I do?
You will need to implement a DAO Manager. I took the main idea from this website, however I made my own implementation that solves a few issues.
Step 1: Connection pooling
First of all, you will have to configure a connection pool. A connection pool is, well, a pool of connections. When your application runs, the connection pool will open a certain number of connections; this is done to avoid creating connections at runtime, since it is an expensive operation. This guide is not meant to explain how to configure one, so go look that up elsewhere.
For the record, I'll use Java as my language and Glassfish as my server.
Step 2: Connect to the database
Let's start by creating a DAOManager class. Let's give it methods to open and close a connection at runtime. Nothing too fancy.
public class DAOManager {

    public DAOManager() throws Exception {
        try
        {
            InitialContext ctx = new InitialContext();
            this.src = (DataSource) ctx.lookup("jndi/MYSQL"); //The string should be the same name you gave to your JNDI resource in Glassfish.
        }
        catch(Exception e) { throw e; }
    }

    public void open() throws SQLException {
        try
        {
            if(this.con == null || this.con.isClosed())
                this.con = src.getConnection();
        }
        catch(SQLException e) { throw e; }
    }

    public void close() throws SQLException {
        try
        {
            if(this.con != null && !this.con.isClosed())
                this.con.close();
        }
        catch(SQLException e) { throw e; }
    }

    //Private
    private DataSource src;
    private Connection con;
}
This isn't a very fancy class, but it'll be the basis of what we're going to do. So, doing this:
DAOManager mngr = new DAOManager();
mngr.open();
mngr.close();
should open and close your connection to the database in an object.
Step 3: Make it a single point!
What, now, if we did this?
DAOManager mngr1 = new DAOManager();
DAOManager mngr2 = new DAOManager();
mngr1.open();
mngr2.open();
Some might argue, "why in the world would you do this?". But then you never know what a programmer will do. Even then, a programmer might forget to close a connection before opening a new one. Plus, this is a waste of resources for the application. Stop here if you actually want to have two or more open connections; what follows is an implementation for one connection per user.
In order to make it a single point, we will have to convert this class into a singleton. A singleton is a design pattern that allows us to have one and only one instance of any given object. So, let's make it a singleton!
We must convert our public constructor into a private one. We must only give an instance to whoever calls it. The DAOManager then becomes a factory!
We must also add a new private class that will actually store a singleton.
Alongside all of this, we also need a getInstance() method that will give us a singleton instance we can call.
Let's see how it's implemented.
public class DAOManager {

    public static DAOManager getInstance() {
        return DAOManagerSingleton.INSTANCE;
    }

    public void open() throws SQLException {
        try
        {
            if(this.con == null || this.con.isClosed())
                this.con = src.getConnection();
        }
        catch(SQLException e) { throw e; }
    }

    public void close() throws SQLException {
        try
        {
            if(this.con != null && !this.con.isClosed())
                this.con.close();
        }
        catch(SQLException e) { throw e; }
    }

    //Private
    private DataSource src;
    private Connection con;

    private DAOManager() throws Exception {
        try
        {
            InitialContext ctx = new InitialContext();
            this.src = (DataSource) ctx.lookup("jndi/MYSQL");
        }
        catch(Exception e) { throw e; }
    }

    private static class DAOManagerSingleton {

        public static final DAOManager INSTANCE;
        static
        {
            DAOManager dm;
            try
            {
                dm = new DAOManager();
            }
            catch(Exception e)
            {
                dm = null;
            }
            INSTANCE = dm;
        }
    }
}
When the application starts, whenever anyone needs a singleton the system will instantiate one DAOManager. Quite neat, we've created a single access point!
But singleton is an antipattern because reasons!
I know some people won't like the singleton. However it solves the problem (and has solved mine) quite decently. This is just one way of implementing this solution; if you have other ways, you're welcome to suggest them.
Step 4: But there's something wrong...
Yes, indeed there is. A singleton will create only ONE instance for the whole application! And this is wrong on many levels, especially in a web system where our application will be multithreaded! How do we solve this, then?
Java provides a class named ThreadLocal. A ThreadLocal variable will have one instance per thread. Hey, it solves our problem! Read up on how it works; you will need to understand its purpose so we can continue.
Let's make our INSTANCE ThreadLocal then. Modify the class this way:
public class DAOManager {

    public static DAOManager getInstance() {
        return DAOManagerSingleton.INSTANCE.get();
    }

    public void open() throws SQLException {
        try
        {
            if(this.con == null || this.con.isClosed())
                this.con = src.getConnection();
        }
        catch(SQLException e) { throw e; }
    }

    public void close() throws SQLException {
        try
        {
            if(this.con != null && !this.con.isClosed())
                this.con.close();
        }
        catch(SQLException e) { throw e; }
    }

    //Private
    private DataSource src;
    private Connection con;

    private DAOManager() throws Exception {
        try
        {
            InitialContext ctx = new InitialContext();
            this.src = (DataSource) ctx.lookup("jndi/MYSQL");
        }
        catch(Exception e) { throw e; }
    }

    private static class DAOManagerSingleton {

        public static final ThreadLocal<DAOManager> INSTANCE;
        static
        {
            ThreadLocal<DAOManager> dm;
            try
            {
                dm = new ThreadLocal<DAOManager>() {
                    @Override
                    protected DAOManager initialValue() {
                        try
                        {
                            return new DAOManager();
                        }
                        catch(Exception e)
                        {
                            return null;
                        }
                    }
                };
            }
            catch(Exception e)
            {
                dm = null;
            }
            INSTANCE = dm;
        }
    }
}
I would seriously love to not do this
catch(Exception e)
{
return null;
}
but initialValue() can't throw a checked exception. Oh, initialValue(), you mean? This method tells us what value the ThreadLocal variable will hold; basically, we're initializing it. So, thanks to this, we can now have one instance per thread.
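As a side note, on Java 8 and later the same per-thread initialization can be written more compactly with ThreadLocal.withInitial; the checked exception still has to be swallowed inside the lambda, because a Supplier cannot throw one. A sketch:

// Java 8+ sketch: same per-thread initialization, expressed with a Supplier.
public static final ThreadLocal<DAOManager> INSTANCE =
        ThreadLocal.withInitial(() -> {
            try {
                return new DAOManager();
            } catch (Exception e) {
                return null; // a Supplier cannot throw checked exceptions
            }
        });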
Step 5: Create a DAO
A DAOManager is nothing without a DAO. So we should at least create a couple of them.
A DAO, short for "Data Access Object", is a design pattern that gives the responsibility of managing database operations to a class representing a certain table.
In order to use our DAOManager more efficiently, we will define a GenericDAO, which is an abstract DAO that will hold the common operations between all DAOs.
public abstract class GenericDAO<T> {

    public abstract int count() throws SQLException;

    //Protected
    protected final String tableName;
    protected Connection con;

    protected GenericDAO(Connection con, String tableName) {
        this.tableName = tableName;
        this.con = con;
    }
}
For now, that will be enough. Let's create some DAOs. Let's suppose we have two POJOs: First and Second, both with just a String field named data and its getters and setters.
public class FirstDAO extends GenericDAO<First> {

    public FirstDAO(Connection con) {
        super(con, TABLENAME);
    }

    @Override
    public int count() throws SQLException {
        String query = "SELECT COUNT(*) AS count FROM " + this.tableName;
        PreparedStatement counter;
        try
        {
            counter = this.con.prepareStatement(query);
            ResultSet res = counter.executeQuery();
            res.next();
            return res.getInt("count");
        }
        catch(SQLException e) { throw e; }
    }

    //Private
    private final static String TABLENAME = "FIRST";
}
SecondDAO will have more or less the same structure, just changing TABLENAME to "SECOND".
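For completeness, a sketch of that sibling DAO (same shape as FirstDAO; only the table name and the type parameter change):

public class SecondDAO extends GenericDAO<Second> {

    public SecondDAO(Connection con) {
        super(con, TABLENAME);
    }

    @Override
    public int count() throws SQLException {
        // Same counting query as FirstDAO, just against the SECOND table.
        String query = "SELECT COUNT(*) AS count FROM " + this.tableName;
        PreparedStatement counter;
        try
        {
            counter = this.con.prepareStatement(query);
            ResultSet res = counter.executeQuery();
            res.next();
            return res.getInt("count");
        }
        catch(SQLException e) { throw e; }
    }

    //Private
    private final static String TABLENAME = "SECOND";
}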
Step 6: Making the manager a factory
DAOManager should not only serve as a single connection point. Actually, DAOManager should answer this question:
Who is responsible for managing the connections to the database?
The individual DAOs shouldn't manage them; DAOManager should. We've partially answered the question, but now we shouldn't let anyone else manage connections to the database, not even the DAOs. But the DAOs need a connection to the database! Who should provide it? DAOManager, indeed! What we should do is create a factory method inside DAOManager. Not just that: DAOManager will also hand them the current connection!
Factory is a design pattern that will allow us to create instances of a certain superclass, without knowing exactly what child class will be returned.
First, let's create an enum listing our tables.
public enum Table { FIRST, SECOND }
And now, the factory method inside DAOManager:
public GenericDAO getDAO(Table t) throws SQLException
{
    try
    {
        if(this.con == null || this.con.isClosed()) //Let's ensure our connection is open
            this.open();
    }
    catch(SQLException e) { throw e; }

    switch(t)
    {
        case FIRST:
            return new FirstDAO(this.con);
        case SECOND:
            return new SecondDAO(this.con);
        default:
            throw new SQLException("Trying to link to a nonexistent table.");
    }
}
Step 7: Putting everything together
We're good to go now. Try the following code:
DAOManager dao = DAOManager.getInstance();
FirstDAO fDao = (FirstDAO)dao.getDAO(Table.FIRST);
SecondDAO sDao = (SecondDAO)dao.getDAO(Table.SECOND);
System.out.println(fDao.count());
System.out.println(sDao.count());
dao.close();
Isn't it fancy and easy to read? Not just that, but when you call close(), you close every single connection the DAOs are using. But how?! Well, they're sharing the same connection, so it's just natural.
Step 8: Fine-tuning our class
We can do several things from here on. To ensure connections are closed and returned to the pool, do the following in DAOManager:
@Override
protected void finalize() throws Throwable
{
    try { this.close(); }
    finally { super.finalize(); }
}
You can also implement methods that encapsulate setAutoCommit(), commit() and rollback() from the Connection, so you can have better handling of your transactions. What I also did is have DAOManager hold a PreparedStatement and a ResultSet in addition to the Connection, so when close() is called it closes both of them as well. A fast way of closing statements and result sets!
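For example, the transaction helpers inside DAOManager could look roughly like this (the method names are my own, not part of the original design):

// Sketch: thin wrappers around the underlying Connection for transaction handling.
public void beginTransaction() throws SQLException {
    this.open();                   // make sure we have a live connection
    this.con.setAutoCommit(false); // start a transaction
}

public void commit() throws SQLException {
    this.con.commit();
    this.con.setAutoCommit(true);
}

public void rollback() throws SQLException {
    this.con.rollback();
    this.con.setAutoCommit(true);
}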
I hope this guide can be of any use to you in your next project!
I think that if you want to do a simple DAO pattern in plain JDBC you should keep it simple:
public List<Customer> listCustomers() {
    List<Customer> list = new ArrayList<>();
    try (Connection conn = getConnection();
         Statement s = conn.createStatement();
         ResultSet rs = s.executeQuery("select * from customers")) {
        while (rs.next()) {
            list.add(processRow(rs));
        }
        return list;
    } catch (SQLException e) {
        throw new RuntimeException(e.getMessage(), e); //or your exceptions
    }
}
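Here, processRow is assumed to be a small private row-to-object mapper; something like the following sketch, where the column names and Customer setters are of course made up:

private Customer processRow(ResultSet rs) throws SQLException {
    // Map one row of the result set to a Customer object.
    Customer c = new Customer();
    c.setId(rs.getLong("id"));       // hypothetical column
    c.setName(rs.getString("name")); // hypothetical column
    return c;
}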
You can follow this pattern in a class called for example CustomersDao or CustomerManager, and you can call it with a simple
CustomersDao dao = new CustomersDao();
List<Customers> customers = dao.listCustomers();
Note that I'm using try-with-resources, so this code is safe from connection leaks, clean, and straightforward. You probably don't want to follow the full DAO pattern with factories, interfaces and all that plumbing, which in many cases doesn't add real value.
I don't think it's a good idea to use ThreadLocals; used badly, as in the accepted answer, they are a source of classloader leaks.
Remember to ALWAYS close your resources (Statements, ResultSets, Connections) in a try-finally block or by using try-with-resources.
I'm working with EJB/JPA and I've created a static method called createDataset that looks up a Dataset object. Each time I have to insert, update, remove, etc. an entity, I retrieve a Dataset object by calling DatasetFactory.createDataset() and call the appropriate method (insert, update, etc.).
The code:
public class DatasetFactory {
    public static Dataset createDataset() {
        try {
            return (Dataset) new InitialContext().lookup("java:global/.../Dataset");
        } catch (Exception ex) {
            throw new RuntimeException(ex);
        }
    }
}

public interface Dataset<T> {
    void insert(T entity);
    //...
}

@Stateless
@EJB(name = "java:global/.../Dataset", beanInterface = Dataset.class)
public class DatasetBean<T> implements Dataset<T> {

    @PersistenceContext(type = PersistenceContextType.TRANSACTION)
    private EntityManager entityManager;

    @Override
    public void insert(T entity) {
        entityManager.persist(entity);
    }
    //...
}
Could I have thread safety problems using this approach? If so, what modifications should I make? Should I put the synchronized modifier on DatasetFactory.createDataset()?
Thanks a lot!
You don't ever have to synchronize any method of an EJB, because the EJB specification specifies that an EJB instance may not be called by two concurrent threads. The EJB container handles the synchronization and thread safety for you. That's one of the points of using EJBs.
From a thread-safety point of view, your code looks good.
But it looks like you are implementing a DAO (Data Access Object), only you are calling your DAO a Dataset instead. It is not a good idea to implement DAOs as EJBs, because the EJB container loads and verifies all your EJBs at startup, which can slow things down. Also, the container usually keeps only a certain number of EJB instances in memory (the EJB pool), whereas if you don't implement your DAOs as EJBs you can create as many of them as you want and Java's GC cleans them up for you.
If your EntityManager is thread-safe, then there is no risk in using your insert method.
I have a Java method calling a web service and making changes to the database based on the response. My task is to eliminate concurrency errors when several users use this application simultaneously.
I was trying to use various types of database locking all day but nothing worked. I finally tried using synchronized on the processRequest method and it all worked.
My whole application is single-threaded. Why does synchronized solve this?
Edit: Added Code.
public class ProcessMakePaymentServlet extends HttpServlet {

    private DbBean db = new DbBean();

    protected synchronized void processRequest(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // defining variables...
        try {
            // initialize parameters for invoking remote method
            db.connect();
            startTransaction(); //autocommit=0; START TRANSACTION;

            // process debit
            //this method gets the account using a select...for update.
            //it then updates it with the new value
            successfulDebit = debitAccount(userId, amt);
            if (successfulDebit) {
                // contact payment gateway by invoking remote pay web service method here.
                // create new instances of remote Service objects
                org.tempuri.Service service = new org.tempuri.Service();
                org.tempuri.ServiceSoap port = service.getServiceSoap();

                // invoke the remote method by calling port.pay().
                // port.pay() may time out if the remote service is down and throw an exception
                successfullyInformedPaymentGateway = port.pay(bankId, bankPwd, payeeId, referenceId, amt);
                if (successfullyInformedPaymentGateway) {
                    // insert payment record
                    recordPaymentMade(userId, amt, referenceId);
                    //call to the database to record the transaction. Simple update statement.
                    out.println("<br/>-----<br/>");
                    //getTotalPaymentMade does a select to sum all the payment amounts
                    out.println("Total payment made so far to gateway: " + getTotalPaymentMade());
                    commitTransaction(); //calls COMMIT
                    db.close(); //connection closed.
                } else {
                    rollbackTransaction(); //calls ROLLBACK
                    db.close();
                    successfulDebit = false;
                    out.println("<br/>-----<br/>");
                    out.println("Incorrect bank details.");
                }
            } else {
                rollbackTransaction(); //calls ROLLBACK
                db.close();
                out.println("<br/>-----<br/>");
                out.println("Invalid payment amount.");
            }
        } catch (Exception ex) {
            try {
                rollbackTransaction(); //calls ROLLBACK
                db.close();
            } catch (Exception ex1) {
            }
        }
    }
}
My whole application is single-threaded. Why does synchronized solve this?
No, it is not single-threaded. The web method is called by multiple threads serving the client requests.
The web service method implementation must take care of all synchronization issues; the same as in a servlet implementation receiving multiple requests, care must be taken to ensure thread safety.
In your case, by adding synchronized you made sure that concurrent processing of client requests does not result in corruption due to threading issues, and you are essentially serializing the client requests (and thereby access to the DB).
You have not posted any code showing what you are doing wrong, but since synchronized at the web method level solves your problem, either you did not do the locking at the DB level properly, as you say, or threading issues corrupted shared variables in the web service layer accessing the DB.
By synchronizing the web method, the code is thread-safe, but performance will deteriorate since you will serve one client at a time.
It depends on what your requirements are.
Just move private DbBean db = new DbBean(); into the servlet method; this should solve the concurrency problem:
protected void processRequest(HttpServletRequest request, ...) {
    // defining variables...
    DbBean db = new DbBean();
    ...
}
Nevertheless, you should properly clean up all database resources in a finally block. A fairly simplified example, but I hope you get what I mean:
protected void processRequest(HttpServletRequest request, ...) {
    // defining variables...
    DbBean db = null;
    boolean commit = false;
    try {
        db = new DbBean();
        ...
        commit = true;
    } catch (SomeException e) {
        commit = false;
    } finally {
        if (db != null) {
            db.release(commit); /* close database connection => java.sql.Connection#close() */
        }
    }
}
I guess the DAO is thread-safe, since it does not use any class members.
So can it be used without any problem as a private field of a servlet? We need only one copy, and multiple threads can access it simultaneously, so why bother creating a local variable, right?
"DAO" is just a general term for database abstraction classes. Whether they are threadsafe or not depends on the specific implementation.
This bad example could be called a DAO, but it would get you into trouble if multiple threads call the insert method at the same time.
class MyDAO {

    private Connection connection = null;

    public boolean insertSomething(Something o) throws Exception {
        try {
            connection = getConnection();
            //do insert on connection.
            return true;
        } finally {
            if (connection != null) {
                connection.close();
            }
        }
    }
}
So the answer is: if your DAO handles connections and transactions right, it should work.
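For comparison, a thread-safe variant of the example above keeps the connection local to the method; a sketch (the SQL statement and the getter are made up):

class MyDAO {

    public boolean insertSomething(Something o) throws Exception {
        // The Connection is a local variable, so concurrent calls do not share it,
        // and try-with-resources closes it even if the insert fails.
        try (Connection connection = getConnection();
             PreparedStatement ps = connection.prepareStatement(
                     "insert into something (data) values (?)")) { // hypothetical table/column
            ps.setString(1, o.getData()); // hypothetical getter
            return ps.executeUpdate() == 1;
        }
    }
}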