Java web-services concurrency issues

I have a Java method that calls a web service and makes changes to the database based on the response. My task is to eliminate concurrency errors when several users use this application simultaneously.
I spent all day trying various types of database locking, but nothing worked. I finally tried declaring the processRequest method synchronized, and everything worked.
My whole application is single-threaded. Why does synchronized solve this?
Edit: Added Code.
public class ProcessMakePaymentServlet extends HttpServlet {

    private DbBean db = new DbBean();

    protected synchronized void processRequest(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // defining variables...
        try {
            // initialize parameters for invoking remote method
            db.connect();
            startTransaction(); // autocommit=0; START TRANSACTION;

            // process debit
            // this method fetches the row with a SELECT ... FOR UPDATE,
            // then updates it with the new value
            successfulDebit = debitAccount(userId, amt);
            if (successfulDebit) {
                // contact the payment gateway by invoking the remote pay web service method here
                // create new instances of the remote Service objects
                org.tempuri.Service service = new org.tempuri.Service();
                org.tempuri.ServiceSoap port = service.getServiceSoap();

                // invoke the remote method by calling port.pay();
                // port.pay() may time out and throw an exception if the remote service is down
                successfullyInformedPaymentGateway = port.pay(bankId, bankPwd, payeeId, referenceId, amt);
                if (successfullyInformedPaymentGateway) {
                    // insert payment record: a simple UPDATE/INSERT against the database
                    recordPaymentMade(userId, amt, referenceId);
                    out.println("<br/>-----<br/>");
                    // getTotalPaymentMade() does a SELECT to sum all the payment amounts
                    out.println("Total payment made so far to gateway: " + getTotalPaymentMade());
                    commitTransaction(); // calls COMMIT
                    db.close();          // connection closed
                } else {
                    rollbackTransaction(); // calls ROLLBACK
                    db.close();
                    successfulDebit = false;
                    out.println("<br/>-----<br/>");
                    out.println("Incorrect bank details.");
                }
            } else {
                rollbackTransaction(); // calls ROLLBACK
                db.close();
                out.println("<br/>-----<br/>");
                out.println("Invalid payment amount.");
            }
        } catch (Exception ex) {
            try {
                rollbackTransaction(); // calls ROLLBACK
                db.close();
            } catch (Exception ex1) {
            }
        }
    }
}

"My whole application is single-threaded. Why does synchronized solve this?"
No, it is not single-threaded. The web service is called by multiple threads handling the client requests.
The web service method implementation must take care of synchronization issues just like a servlet implementation receiving multiple requests: care must be taken to ensure thread safety.
In your case, adding synchronized ensured that concurrent processing of web service client requests did not corrupt anything due to threading issues; you are essentially serializing the client requests (and therefore access to the DB).
You have not posted any code showing what you are doing wrong, but since synchronized at the web method level solves your problem, either you did not implement the locking at the DB level properly as you say, or threading issues corrupted shared variables at the web service layer accessing the DB.
By synchronizing the web method the code is thread safe, but performance will deteriorate since you serve one client at a time.
It depends on what your requirements are.

Just move private DbBean db = new DbBean(); into the servlet method; this should solve the concurrency problem:
protected void processRequest(HttpServletRequest request, ...) {
    // defining variables...
    DbBean db = new DbBean();
    ...
}
Nevertheless, you should properly clean up all database resources in a finally block. A fairly simplified example, but I hope you get what I mean:
protected void processRequest(HttpServletRequest request, ...) {
    // defining variables...
    DbBean db = null;
    boolean commit = false;
    try {
        db = new DbBean();
        // ... do the actual work ...
        commit = true;
    } catch (SomeException e) {
        commit = false;
    } finally {
        if (db != null) {
            db.release(commit); /* close database connection => java.sql.Connection#close() */
        }
    }
    ...
}
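Applied to the payment flow from the question, the same pattern might look roughly like this (a sketch only: it reuses the DbBean and the startTransaction()/commitTransaction()/rollbackTransaction() helpers from the question and elides the debit/gateway logic):
protected void processRequest(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException {
    // One DbBean per request: concurrent users no longer share a connection or transaction.
    DbBean db = new DbBean();
    boolean commit = false;
    try {
        db.connect();
        startTransaction();                 // autocommit=0; START TRANSACTION
        // ... debit the account, call port.pay(), record the payment ...
        commit = true;                      // only reached if every step succeeded
    } catch (Exception e) {
        commit = false;                     // any failure (including a gateway timeout) rolls back
    } finally {
        if (commit) {
            commitTransaction();            // calls COMMIT
        } else {
            rollbackTransaction();          // calls ROLLBACK
        }
        db.close();                         // always release the connection
    }
}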


Managing threads accessing a database with Java

I am working on an app that accesses an SQLite database. The problem is the DB gets locked when there is a query to it. Most of the time this is not a problem because the flow of the app is quite linear.
However I have a very long calculation process which is triggered by the user. This process involves multiple calls to the database in between calculations.
I wanted the user to get some visual feedback, so I have been using a JavaFX ProgressIndicator and a Service from the javafx.concurrent framework.
The problem is this leaves the user free to move around the app and potentially trigger other calls to the database.
This caused an exception that the database file is locked.
I would like a way to stop that thread from running when this happens, but I have not been able to find any clear examples online. Most of them are oversimplified and I would like an approach that scales. I've tried using the cancel() method, but this does not guarantee that the thread will be cancelled in time.
Because I am not able to check for isCancelled in all parts of the code, sometimes there is a delay between the time the thread is cancelled and the time it effectively stops.
So I thought of the following solution but I would like to know if there is a better way in terms of efficiency and avoiding race conditions and hanging.
// Start service
final CalculatorService calculatorService = new CalculatorService();

// Register service with thread manager
threadManager.registerService(calculatorService);

// Show the progress indicator only when the service is running
progressIndicator.visibleProperty().bind(calculatorService.runningProperty());

calculatorService.setOnSucceeded(new EventHandler<WorkerStateEvent>() {
    @Override
    public void handle(WorkerStateEvent workerStateEvent) {
        System.out.println("SUCCEEDED");
        calculatorService.setStopped(true);
    }
});

// If something goes wrong display message
calculatorService.setOnFailed(new EventHandler<WorkerStateEvent>() {
    @Override
    public void handle(WorkerStateEvent workerStateEvent) {
        System.out.println("FAILED");
        calculatorService.setStopped(true);
    }
});

// Restart the service
calculatorService.restart();
This is my service class which I have subclassed to include methods that can be used to set the state of the service (stopped or not stopped)
public class CalculatorService extends Service<Result> implements CustomService {

    private AtomicBoolean stopped;
    private CalculatorService serviceInstance;

    public CalculatorService() {
        stopped = new AtomicBoolean(false);
        serviceInstance = this;
    }

    @Override
    protected Task<Result> createTask() {
        return new Task<Result>() {
            @Override
            protected Result call() throws Exception {
                try {
                    Result result = calculationMethod(this, serviceInstance);
                    return result;
                } catch (Exception ex) {
                    // If the thread is interrupted, return
                    setStopped(true);
                    return null;
                }
            }
        };
    }

    @Override
    public boolean isStopped() {
        return stopped.get();
    }

    @Override
    public void setStopped(boolean stopped) {
        this.stopped.set(stopped);
    }
}
The service implements this interface which I defined
public interface CustomService {

    /**
     * Method to check if a service has been stopped
     *
     * @return
     */
    public boolean isStopped();

    /**
     * Method to set a service as stopped
     *
     * @param stopped
     */
    public void setStopped(boolean stopped);
}
All services must register themselves with the thread manager which is a singleton class.
public class ThreadManager {

    private ArrayList<CustomService> services;

    /**
     * Constructor
     */
    public ThreadManager() {
        services = new ArrayList<CustomService>();
    }

    /**
     * Method to cancel running services
     */
    public boolean cancelServices() {
        for (CustomService service : services) {
            if (((Service) service).isRunning()) {
                ((Service) service).cancel();
                while (!service.isStopped()) {
                    // Wait for it to stop
                }
            }
        }
        return true;
    }

    /**
     * Method to register a service
     */
    public void registerService(CustomService service) {
        services.add(service);
    }

    /**
     * Method to remove a service
     */
    public void removeService(CustomService service) {
        services.remove(service);
    }
}
Anywhere in the app, if we want to stop the service we call cancelServices(). This sets the state to cancelled; I check for this in my calculationMethod() and then set the state to stopped just before returning (effectively ending the thread):
if (task.isCancelled()) {
    service.setStopped(true);
    return null;
}
(I will assume you are using JDBC for your database queries and that you have control over the code running the queries)
I would centralize all database access in a singleton class that keeps the PreparedStatement running the current query in a single-thread ExecutorService. You could then ask that singleton instance things like isQueryRunning(), runQuery(), cancelQuery(), all synchronized, so you can decide to show a message to the user whenever the computation should be cancelled, cancel it, and start a new one.
Something like (add null checks and catch (SQLException e) blocks):
public class DB {

    private Connection cnx;
    private PreparedStatement lastQuery = null;
    private ExecutorService exec = Executors.newSingleThreadExecutor(); // So you execute only one query at a time

    public synchronized boolean isQueryRunning() {
        return lastQuery != null;
    }

    public synchronized Future<ResultSet> runQuery(String query) throws SQLException {
        // You might want to throw an Exception here if lastQuery is not null (i.e. a query is running)
        lastQuery = cnx.prepareStatement(query);
        return exec.submit(new Callable<ResultSet>() {
            public ResultSet call() throws Exception {
                try {
                    return lastQuery.executeQuery();
                } finally { // Close the statement after the query has finished and set it back to null, synchronizing
                    synchronized (DB.this) {
                        lastQuery.close();
                        lastQuery = null;
                    }
                }
            }
        });
        // Or wrap the above Future<ResultSet> so that Future.cancel() will actually cancel the query
    }

    public synchronized void cancelQuery() throws SQLException {
        lastQuery.cancel(); // I hope SQLite supports this
        lastQuery.close();
        lastQuery = null;
    }
}
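A hedged usage sketch from the JavaFX side (DB.getInstance() is assumed here; the snippet above does not show how the singleton is exposed):
// Hypothetical caller, e.g. an action handler on the JavaFX Application Thread.
void startCalculation() throws Exception {
    DB db = DB.getInstance();                     // assumed singleton accessor
    if (db.isQueryRunning()) {
        db.cancelQuery();                         // or ask the user first
    }
    Future<ResultSet> pending =
            db.runQuery("SELECT * FROM results"); // hypothetical query
    // Consume 'pending' off the FX thread, e.g. inside a javafx.concurrent.Task.
}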
A solution to your problem could be Thread.stop(), which was deprecated ages ago (you can find more on the topic here).
To implement similar behavior it is suggested to use Thread.interrupt(), which is (in the context of a Task) the same as Task.cancel().
Solutions:
Fill your calculationMethod with isCancelled() checks.
Try to interrupt the underlying operation from another thread.
The second solution is probably what you are looking for, but it depends on the actual code of the calculationMethod (which I guess you can't share).
Generic examples for killing long database operations (all of these are performed from another thread):
Kill the connection to the Database (assuming that the Database is smart enough to kill the operation on disconnect and then unlock the database).
Ask for the Database to kill an operation (eg. kill <SPID>).
EDIT:
I hadn't seen that you had specified SQLite as the database when I wrote my answer. So, to narrow the solutions down for SQLite:
Killing the connection will not help.
Look for the equivalent of sqlite3_interrupt in your Java SQLite interface.
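For example (hedged: Statement.cancel() is the standard JDBC hook for aborting a running statement from another thread, but whether your SQLite driver maps it to sqlite3_interrupt depends on the driver implementation):
// Call this from a second thread while the query is executing on the first.
void cancelRunningQuery(java.sql.Statement runningStatement) {
    try {
        runningStatement.cancel();        // may translate to sqlite3_interrupt(), driver-dependent
    } catch (java.sql.SQLException e) {
        // the driver may not support cancellation
    }
}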
Maybe you can invoke t1.interrupt() on the thread instance t1, and then in the thread's run method (or calculationMethod) add a conditional check:
public void run() {
    while (!Thread.currentThread().isInterrupted()) {
        try {
            // my code goes here
        } catch (IOException ex) {
            log.error(ex, ex);
        }
    }
}
With WAL mode (write-ahead logging) you can run many queries against the SQLite database in parallel:
"WAL provides more concurrency as readers do not block writers and a writer does not block readers. Reading and writing can proceed concurrently."
https://sqlite.org/wal.html
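For example, a hedged sketch of switching the database to WAL from Java (the PRAGMA itself is standard SQLite; the jdbc:sqlite: URL and the java.sql imports are assumptions about your setup):
// Run once against the database file; the journal mode is persistent.
try (Connection cnx = DriverManager.getConnection("jdbc:sqlite:app.db");
     Statement st = cnx.createStatement()) {
    st.execute("PRAGMA journal_mode=WAL;");
}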
Perhaps these links are of interest to you:
https://stackoverflow.com/a/6654908/1989579
https://groups.google.com/forum/#!topic/sqlcipher/4pE_XAE14TY
https://stackoverflow.com/a/16205732/1989579

Automatic retry of transactions/requests in Dropwizard/JPA/Hibernate

I am currently implementing a REST API web service using the Dropwizard framework together with dropwizard-hibernate, i.e. JPA/Hibernate (using a PostgreSQL database).
I have a method inside a resource which I annotated with @UnitOfWork to get one transaction for the whole request.
The resource method calls a method of one of my DAOs which extends AbstractDAO<MyEntity> and is used to communicate retrieval or modification of my entities (of type MyEntity) with the database.
This DAO method does the following: First it selects an entity instance and therefore a row from the database. Afterwards, the entity instance is inspected and based on its properties, some of its properties can be altered. In this case, the row in the database should be updated.
I didn't specify anything else regarding caching, locking or transactions anywhere, so I assume the default is some kind of optimistic locking mechanism enforced by Hibernate.
Therefore (I think), when deleting the entity instance in another thread after selecting it from the database in the current one, a StaleStateException is thrown when trying to commit the transaction because the entity instance which should be updated has been deleted before by the other thread.
When using the @UnitOfWork annotation, my understanding is that I'm not able to catch this exception, neither in the DAO method nor in the resource method.
I could now implement an ExceptionMapper<StaleStateException> for Jersey to deliver a HTTP 503 response with a Retry-After header or something like that to the client to tell it to retry its request.
But I'd first rather like to retry the request/transaction (which is basically the same here because of the @UnitOfWork annotation) while still on the server.
Is there any example implementation for a server-sided transaction retry mechanism when using Dropwizard? Like retrying a configurable amount of times (e.g. 3) and then failing with an exception/HTTP 503 response.
How would you implement this? The first thing that came to my mind is another annotation like @Retry(exception = StaleStateException.class, count = 3) which I could add to my resource.
Any suggestions on this?
Or is there an alternative solution to my problem considering different locking/transaction-related things?
An alternative approach is to use an injection framework - in my case Guice - and use method interceptors. This is a more generic solution.
Dropwizard integrates with Guice very smoothly through https://github.com/xvik/dropwizard-guicey
I have a generic implementation that can retry any exception. Like yours, it works off an annotation, as follows:
@Target({ElementType.TYPE, ElementType.METHOD})
@Retention(RetentionPolicy.RUNTIME)
public @interface Retry {
}
The interceptor then does (with docs):
/**
 * Abstract interceptor to catch exceptions and retry the method automatically.
 * Things to note:
 *
 * 1. Method must be idempotent (you can invoke it x times without altering the result)
 * 2. Method MUST re-open a connection to the DB if that is what is retried. Connections are in an undefined state after a rollback/deadlock.
 *    You can try and reuse them, however the result will likely not be what you expected
 * 3. Implement the retry logic intelligently. You may need to unpack the exception to get to the original.
 *
 * @author artur
 */
public abstract class RetryInterceptor implements MethodInterceptor {

    private static final Logger log = Logger.getLogger(RetryInterceptor.class);

    @Override
    public Object invoke(MethodInvocation invocation) throws Throwable {
        if (invocation.getMethod().isAnnotationPresent(Retry.class)) {
            int retryCount = 0;
            boolean retry = true;
            while (retry && retryCount < maxRetries()) {
                try {
                    return invocation.proceed();
                } catch (Exception e) {
                    log.warn("Exception occurred while trying to execute method", e);
                    if (!retry(e)) {
                        retry = false;
                    } else {
                        retryCount++;
                    }
                }
            }
        }
        throw new IllegalStateException("All retries of invocation failed");
    }

    protected boolean retry(Exception e) {
        return false;
    }

    protected int maxRetries() {
        return 0;
    }
}
A few things to note about this approach.
The retried method must be designed so it can be invoked multiple times without altering the result (e.g. if the method stores temporary results in the form of increments, then executing it twice might increment twice).
Database exceptions are generally not safe for retry as-is: the retried method must open a new connection (in particular when retrying deadlocks, which is my case).
Other than that this base implementation simply catches anything and then delegates the retry count and detection to the implementing class. For example, my specific deadlock retry interceptor:
public class DeadlockRetryInterceptor extends RetryInterceptor {

    private static final Logger log = Logger.getLogger(MsRetryInterceptor.class);

    @Override
    protected int maxRetries() {
        return 6;
    }

    @Override
    protected boolean retry(Exception e) {
        SQLException ex = unpack(e);
        if (ex == null) {
            return false;
        }
        int errorCode = ex.getErrorCode();
        log.info("Found exception: " + ex.getClass().getSimpleName() + " With error code: " + errorCode, ex);
        return errorCode == 1205;
    }

    private SQLException unpack(final Throwable t) {
        if (t == null) {
            return null;
        }
        if (t instanceof SQLException) {
            return (SQLException) t;
        }
        return unpack(t.getCause());
    }
}
And finally, I can bind this in Guice by doing:
bindInterceptor(Matchers.any(), Matchers.annotatedWith(Retry.class), new MsRetryInterceptor());
which matches any class, and any method annotated with @Retry.
An example method for retry would be:
@Override
@Retry
public List<MyObject> getSomething(int count, String property) {
    try (Connection con = datasource.getConnection();
         Context c = metrics.timer(TIMER_NAME).time()) {
        // do some work
        // return some stuff
    } catch (SQLException e) {
        // catches exception and throws it out
        throw new RuntimeException("Some more specific thing", e);
    }
}
The reason I need an unpack is that old legacy cases, like this DAO impl, already catch their own exceptions.
Note also how the method (a get) retrieves a new connection from my datasource pool each time it is invoked, and how no modifications are done inside it (hence: safe to retry).
I hope that helps.
You can do similar things by implementing ApplicationListeners or RequestFilters or similar, however I think this is a more generic approach that could retry any kind of failure on any method that is guice bound.
Also note that guice can only intercept methods when it constructs the class (inject annotated constructor etc.)
Hope that helps,
Artur
I found a pull request in the Dropwizard repository that helped me. It basically makes it possible to use the @UnitOfWork annotation on methods other than resource methods.
Using this, I was able to detach the session opening/closing and transaction creation/committing lifecycle from the resource method by moving the @UnitOfWork annotation from the resource method to the DAO method that is responsible for the data manipulation causing the StaleStateException.
Then I was able to build a retry mechanism around this DAO method.
Exemplary explanation:
// class MyEntityDAO extends AbstractDAO<MyEntity>
@UnitOfWork
void tryManipulateData() {
    // Due to optimistic locking, this operation causes a StaleStateException when
    // committed "by the @UnitOfWork annotation" after returning from this method.
}

// Retry mechanism, implemented wheresoever.
void manipulateData() {
    while (true) {
        try {
            tryManipulateData();
        } catch (StaleStateException e) {
            continue; // Retry.
        }
        return;
    }
}

// class MyEntityResource
@POST
// ...
// @UnitOfWork can also be used here if nested transactions are desired.
public Response someResourceMethod() {
    // Call manipulateData() somehow.
}
Of course, one could also attach the @UnitOfWork annotation to a method inside a service class which makes use of the DAOs, instead of applying it directly to a DAO method. In whatever class the annotation is used, remember to create a proxy of the instance with the UnitOfWorkAwareProxyFactory, as described in the pull request.
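As a hedged sketch of that wiring (the exact create(...) overloads depend on your Dropwizard version, and hibernateBundle stands for your application's HibernateBundle):
// Build the DAO through the proxy factory so that @UnitOfWork on its methods
// opens/commits a session around each call.
UnitOfWorkAwareProxyFactory proxyFactory = new UnitOfWorkAwareProxyFactory(hibernateBundle);
MyEntityDAO dao = proxyFactory.create(
        MyEntityDAO.class,
        SessionFactory.class,
        hibernateBundle.getSessionFactory());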

Servlet that starts a thread only once for every visitor

Hey, I want to implement a Java servlet that starts a thread only once for every single user. Even on refresh it should not start again. My last approach brought me some trouble, so no code^^. Any suggestions for the layout of the servlet?
public class LoaderServlet extends HttpServlet {

    // The thread to load the needed information
    private LoaderThread loader;

    // The last.fm account
    private String lfmaccount;

    public LoaderServlet() {
        super();
        lfmaccount = "";
    }

    @Override
    protected void doPost(HttpServletRequest request,
            HttpServletResponse response) throws ServletException, IOException {
        if (loader != null) {
            response.setContentType("text/plain");
            response.setHeader("Cache-Control", "no-cache");
            PrintWriter out = response.getWriter();
            out.write(loader.getStatus());
            out.flush();
            out.close();
        } else {
            loader = new LoaderThread(lfmaccount);
            loader.start();
            request.getRequestDispatcher("WEB-INF/pages/loader.jsp").forward(
                    request, response);
        }
    }

    @Override
    protected void doGet(HttpServletRequest request,
            HttpServletResponse response) throws ServletException, IOException {
        if (lfmaccount.isEmpty()) {
            lfmaccount = request.getSession().getAttribute("lfmUser").toString();
        }
        request.getRequestDispatcher("WEB-INF/pages/loader.jsp").forward(
                request, response);
    }
}
The JSP uses AJAX to regularly post to the servlet and get the status. The thread just runs for about 3 minutes, crawling some last.fm data.
What you need here is a session listener. The sessionCreated() method will be called only once for every browser session, so even if the user refreshes the page there will be no issues.
You can then go ahead and start the thread from the sessionCreated() method call.
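A minimal sketch of that approach (assumptions: the Servlet 3.0 @WebListener annotation is available, LoaderThread is the class from the question, and the "loader" attribute name plus the empty account passed to the constructor are placeholders, since the real last.fm account may only be known later in the session):
import javax.servlet.annotation.WebListener;
import javax.servlet.http.HttpSessionEvent;
import javax.servlet.http.HttpSessionListener;

@WebListener
public class LoaderSessionListener implements HttpSessionListener {

    @Override
    public void sessionCreated(HttpSessionEvent se) {
        // Runs exactly once per browser session, so refreshes never start a second thread.
        LoaderThread loader = new LoaderThread("");
        se.getSession().setAttribute("loader", loader);
        loader.start();
    }

    @Override
    public void sessionDestroyed(HttpSessionEvent se) {
        LoaderThread loader = (LoaderThread) se.getSession().getAttribute("loader");
        if (loader != null) {
            loader.interrupt(); // stop crawling when the session ends
        }
    }
}
The servlet then only reads the "loader" attribute from the session in doPost() and reports loader.getStatus().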
Implement javax.servlet.SingleThreadModel, so that the service method will not be executed concurrently.
See the Servlet specification.
Hypothetically it could be implemented by creating a Map<String, Thread>; when your servlet gets called, it looks up the map with the session id.
Just a sketch:
public class LoaderServlet extends HttpServlet {

    private Map<String, Thread> threadMap = new HashMap<>();

    protected void doPost(..) {
        String sessionId = request.getSession().getId();
        Thread u = null;
        if (threadMap.containsKey(sessionId)) {
            u = threadMap.get(sessionId);
        } else {
            u = new Thread(...);
            threadMap.put(sessionId, u);
        }
        // use thread 'u' as you wish
    }
}
Notes:
this uses session ids, not users, to associate threads
have a look at ThreadPools, they are great
as a commenter pointed out: synchronization issues are not considered in this sketch
Your first task is to figure out how to identify users uniquely, for instance how would you discern different users behind a proxy/SOHO gateway?
Once you have that down it's basically just having a singleton object serving a user<->thread map to your servlet.
And then we get into the scalability issue as @beny23 mentions in a comment above... I absolutely concur with the point made - your approach is not sound scalability-wise!
Cheers,
As I understand it, you want to avoid parallel processing of requests from the same user. I'd suggest another approach: associate a lock with each user and store it in the session. Before starting to process a user's request, try to acquire that lock, so the current thread waits while other requests from the same user are being handled. (Use a session listener to store the lock when the session is created.) For example:
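(A hedged sketch; the "userLock" attribute name is arbitrary, and the listener must be registered via @WebListener or web.xml.)
@WebListener
public class UserLockListener implements HttpSessionListener {

    @Override
    public void sessionCreated(HttpSessionEvent se) {
        // One lock object per session, created before any request can race for it.
        se.getSession().setAttribute("userLock", new Object());
    }

    @Override
    public void sessionDestroyed(HttpSessionEvent se) {
        // nothing to clean up
    }
}
In the servlet, request processing then starts with:
protected void doPost(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException {
    Object lock = request.getSession().getAttribute("userLock");
    synchronized (lock) {
        // Requests from other users proceed in parallel; further requests
        // from this user's session wait here until the current one finishes.
    }
}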

DAO pattern multithreading

I work on a multithreaded Java application; it is a web server that provides REST services, about 1000 requests per second. I have a relational database, and I use Hibernate to access it. The database gets about 300-400 requests per second. I am wondering if my DAO pattern is correct from the multithreading perspective.
So, there is one BaseModel class that looks like this:
public class BaseModelDAO {

    protected Session session;

    protected final void commit() {
        session.getTransaction().commit();
    }

    protected final void openSession() {
        session = HibernateUtil.getSessionFactory().openSession();
        session.beginTransaction();
    }
}
Then I have a DAO class for every table from database:
public class ClientDAOHibernate extends BaseModelDAO implements ClientDAO {

    private Logger log = Logger.getLogger(this.getClass());

    @Override
    public synchronized void addClient(Client client) throws Exception {
        try {
            openSession();
            session.save(client);
            commit();
            log.debug("client successfully added into database");
        } catch (Exception e) {
            log.error("error adding new client into database");
            throw new Exception("couldn't add client into database");
        } finally {
            session.close();
        }
    }

    @Override
    public synchronized Client getClient(String username, String password) throws Exception {
        Client client = null;
        try {
            openSession();
            client = (Client) session.createCriteria(Client.class)
                    .createAlias("user", "UserAlias")
                    .add(Restrictions.eq("UserAlias.username", username))
                    .add(Restrictions.eq("UserAlias.password", password))
                    .uniqueResult();
            commit();
        } catch (Exception e) {
            log.error("error updating user into database");
            throw new DBUsersGetUserException();
        } finally {
            session.close();
        }
        return client;
    }
}
Here are my questions:
Is it OK to open and close the session for every access to the DB, taking into consideration the number of concurrent requests?
Right now, the DAO classes are accessed directly from the application business logic. Should a DAO manager be used instead? If yes, what would be a good design for it?
No, your implementation is not a good one:
transactions should be around business logic, not around data access logic: if you want to transfer money from one account to another, you can't have a transaction for the debit operation, and another transaction for the credit operation. The transaction must cover the whole use-case.
by synchronizing every method of the DAO, you forbid two requests from getting a client at the same time. You should not have a session field in your DAO: the session should be a local variable of each method. By doing this, your DAO becomes stateless, and thus inherently thread-safe, without any need for synchronization (see the sketch below).
As Michael says in his comment, using programmatic transactions makes the code verbose, complex, and not focused on the business use case. Use EJBs or Spring to get declarative transaction management and exception handling.
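A sketch of that stateless variant, kept deliberately close to the original code (transaction demarcation would normally still move up to the service layer, as noted above):
public class ClientDAOHibernate implements ClientDAO {

    private Logger log = Logger.getLogger(this.getClass());

    @Override
    public void addClient(Client client) throws Exception {
        // The Session is a local variable: no shared state, no synchronized needed.
        Session session = HibernateUtil.getSessionFactory().openSession();
        Transaction tx = null;
        try {
            tx = session.beginTransaction();
            session.save(client);
            tx.commit();
            log.debug("client successfully added into database");
        } catch (Exception e) {
            if (tx != null) {
                tx.rollback();
            }
            log.error("error adding new client into database");
            throw new Exception("couldn't add client into database", e);
        } finally {
            session.close();
        }
    }
}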

PlayFramework: catch a deadlock and reissue transaction

I am running a Play! application and am debugging a deadlock.
The error messages I see logged from Play! are:
Deadlock found when trying to get lock; try restarting transaction
Could not synchronize database state with session
org.hibernate.exception.LockAcquisitionException: Could not execute JDBC batch update
From the Play! Documentation
Play will automatically manage transactions for you. It will start a transaction for each HTTP request and commit it when the HTTP response is sent. If your code throws an exception, the transaction will automatically rollback.
From the MySQL Documentation
you must write your applications so that they are always prepared to re-issue a transaction if it gets rolled back because of a deadlock.
My question:
How and where in my Play! application can I catch these rolled back transactions, and handle them (choose to reissue them, ignore them, etc...)?
Updated
While I ended up taking the advice in the accepted answer, and looking for the cause of the deadlock (did you know MySQL foreign key constraints increase the chance of deadlocking? Now I do!), here is some code that was working for me to catch and reissue a failed save.
boolean success = false;
int tries = 0;
while (!success && tries++ < 3) {
    try {
        updated.save();
        success = true;
    } catch (javax.persistence.PersistenceException e) {
        pause(250);
    }
}
You can use a custom Enhancer and enhance all controllers in your plugin.
For example, the enhancer adds a catch block and restarts the request invocation. Restarting the whole request is more reliable than retrying only part of the logic inside the controller:
package plugins;
// ..

final public class ReliableTxPlugin extends PlayPlugin {

    public void enhance(final ApplicationClasses.ApplicationClass applicationClass) throws Exception {
        new TxEnhancer().enhanceThisClass(applicationClass);
    }
}

package enhancers;
// ..

class TxEnhancer extends Enhancer {

    public static void process(PersistenceException e) throws PersistenceException {
        final Throwable cause = e.getCause();
        if (cause instanceof OptimisticLockException || cause instanceof StaleStateException) {
            final EntityTransaction tx = JPA.em().getTransaction();
            if (tx.isActive()) {
                tx.setRollbackOnly();
            }
            Http.Request.current().isNew = false;
            throw new Invoker.Suspend(250);
        }
        throw e;
    }

    public void enhanceThisClass(final ApplicationClass applicationClass) throws Exception {
        // .. general enhancer code
        ctMethod.addCatch("enhancers.TxEnhancer.process(_e);return;",
                classPool.makeClass("javax.persistence.PersistenceException"), "_e");
        // ..
    }
}
In most cases of a deadlock you can only roll back and retry. However, in a web app this should be a really unusual case. What you can do in Play is:
catch the exception
handle the transaction in your code
With JPA.em() you will get the EntityManager. You can look into JPAPlugin to see how Play handles the transaction. However, first of all I would evaluate why there is a deadlock and whether this is really a situation the server can handle intelligently.
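A rough sketch of that manual handling (hedged: Play normally owns the per-request transaction, so explicit begin/commit like this only illustrates the retry shape and is not drop-in code; EntityManager and PersistenceException are the javax.persistence types):
EntityManager em = JPA.em();
for (int attempt = 1; attempt <= 3; attempt++) {
    try {
        em.getTransaction().begin();
        // ... perform the updates that may deadlock ...
        em.getTransaction().commit();
        break; // success, stop retrying
    } catch (PersistenceException e) {
        if (em.getTransaction().isActive()) {
            em.getTransaction().rollback(); // MySQL has already rolled back the deadlocked work
        }
        if (attempt == 3) {
            throw e; // give up after three tries
        }
    }
}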
I built a Play 1 module based on @xedon's work that does the retry for you.
