Assume we have to undertake a transfer between any two accounts (among hundreds out there) as part of a transaction.
And there would be multiple similar transactions running concurrently in a typical multi-threaded environment.
The usual convention would be as below (maintaining the lock order as per a pre-designed convention):
lock account A
lock account B
transfer(A,B)
release B
release A
Is there any way to attempt the locks and release as an atomic operation?
Yes there is: you need to lock the locks under a lock. In other words, you need to create a lock hierarchy. But this solution is not very efficient because it decreases lock granularity.
It looks like in your case it would be sufficient to always take the locks in the same order. For example, always lock the account with the lesser ID first.
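For illustration, here is a minimal sketch of that ordering discipline (the Account class and its lock field are my assumptions, not part of the question):

import java.util.concurrent.locks.ReentrantLock;

// Hypothetical Account with a numeric ID and its own lock.
class Account {
    final long id;
    final ReentrantLock lock = new ReentrantLock();
    Account(long id) { this.id = id; }
}

class Transfers {
    // Every thread acquires the lower-ID lock first, so all threads agree on
    // a single global order and a deadlock cycle cannot form.
    static void withBothLocked(Account a, Account b, Runnable transfer) {
        Account first  = (a.id < b.id) ? a : b;
        Account second = (a.id < b.id) ? b : a;
        first.lock.lock();
        try {
            second.lock.lock();
            try {
                transfer.run();
            } finally {
                second.lock.unlock();
            }
        } finally {
            first.lock.unlock();
        }
    }
}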
A transaction is atomic by the ACID definition (A is for atomicity). Isolation (at least READ_COMMITTED) guarantees that another transaction touching account A at the same time will wait until the previously started transaction has finished. So you actually don't need to lock the accounts explicitly: they will be locked by the underlying implementation (the database, for example), and those locks will be more efficient, since they can use optimistic locking techniques.
But this is only true if they are all participating in one transactional context (as in a JTA environment, for example). In such an environment you could just start a transaction at the beginning of the transfer method, with no need to lock account A and account B.
In case they are not in the same transactional context, you can introduce some other locking object, but this will significantly reduce performance, as threads will block even when one is working with accounts A and B and another with accounts C and D. There are techniques to avoid this situation (see ConcurrentHashMap, for example, where the locks are on buckets rather than on the whole object).
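A minimal sketch of that striping idea (the stripe count and class name are mine):

import java.util.concurrent.locks.ReentrantLock;

// Lock striping: N locks guard the whole key space, so two threads block each
// other only when their keys happen to hash to the same stripe.
class StripedLocks {
    private final ReentrantLock[] stripes;

    StripedLocks(int n) {
        stripes = new ReentrantLock[n];
        for (int i = 0; i < n; i++) {
            stripes[i] = new ReentrantLock();
        }
    }

    ReentrantLock forKey(Object key) {
        // Mask off the sign bit so the index is always non-negative.
        return stripes[(key.hashCode() & 0x7fffffff) % stripes.length];
    }
}

A transfer would still take the stripes of both accounts, in a fixed order, but threads working on unrelated accounts usually land on different stripes and never block each other.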
But with your particular example, the answer can only be some general thoughts, as the example is too short to examine further. I think the variant of locking account A and account B in a particular order is fine for the given situation, though you should be very careful with it, as it can lead to potential deadlocks; and given that transfer is presumably not the only method that works with these accounts, it is genuinely risky.
You can try to use the following code.
Note: it only works for two locks, and I'm unsure how to make it scale to more.
The idea is that you take the first lock and then try to take the second one.
If the second attempt fails, you know the first lock was free a moment ago but the other one is busy.
So you release the first lock and swap the two: you block on the lock that was busy and merely try the one that (a moment ago!) was free, in case it is still free.
Rinse and repeat.
It is a statistical near-impossibility that this code will die of a StackOverflowError, and I think catching it and raising an error is better than looping forever, since hitting it would be a signal that something somewhere is going very wrong.
public static void takeBoth(ReentrantLock l1, ReentrantLock l2) {
    l1.lock();                      // block on the first lock
    if (l2.tryLock()) { return; }   // got both: done
    l1.unlock();                    // second is busy: give the first one back
    // Swap the roles: block on the busy lock, merely try the free one.
    try { takeBoth(l2, l1); }
    catch (StackOverflowError e) { throw new Error("could not acquire both locks", e); }
}
public static void releaseBoth(ReentrantLock l1, ReentrantLock l2) {
    // Fail fast: if we do not hold l1, this unlock() throws
    // IllegalMonitorStateException before we have touched l2.
    if (!l1.isHeldByCurrentThread()) { l1.unlock(); }
    l2.unlock();  // may also fail; in that case we have not touched l1
    l1.unlock();
}
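For example, a hypothetical transfer guarded by these helpers (Account, getLock() and transfer() are illustrative names, not from the question):

// Hypothetical usage: each Account is assumed to expose its own ReentrantLock.
takeBoth(a.getLock(), b.getLock());
try {
    transfer(a, b, amount);    // the critical section
} finally {
    releaseBoth(a.getLock(), b.getLock());
}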
In my Spring web app, I have a service method containing a block of code guarded by a lock.
Only a single thread can enter the code block at a time.
This works fine in a non-clustered environment but fails in a clustered one. In a clustered environment, synchronization happens within a node, but across different nodes the code block executes in parallel. Is this because a separate Lock object is created in each node?
Can anyone advise me?
Code Sample
//Service Class
@Service
class MyServiceClass {

    private final Lock globalLock;

    @Autowired
    public MyServiceClass(@Qualifier("globalLock") final Lock globalLock) {
        this.globalLock = globalLock;
    }

    public void myServiceMethod() {
        ...
        globalLock.lock();
        try {
            ...
        }
        finally {
            globalLock.unlock();
        }
        ...
    }
}//End of MyServiceClass
//Spring Configuration XML
<bean id="globalLock" class="java.util.concurrent.locks.ReentrantLock" scope="singleton" />
If you want to synchronize objects in a cluster environment, meaning many JVMs are involved, your solution will have to involve some kind of communication between those JVMs.
In this case it will take some imagination to get the thing done: you will need the mutual exclusion implemented on some object that is common to all the JVMs involved, a requirement that only grows as you put additional machines into the cluster. Have you thought about a solution based on JNDI? Here you have something on it, but I am afraid it looks rather like an academic discussion:
http://jsr166-concurrency.10961.n7.nabble.com/Lock-implementation-td2180.html
There is always the chance to implement something based on DB mechanisms (remembering that your DB is a central resource common to all the nodes in the cluster). You could devise something based on a SELECT FOR UPDATE mechanism implemented in your database, over some table used only for synchronization...
You have an interesting problem! :) Good luck
You are right, the reason is that each node has its own lock. To solve this, consider introducing in the database a table SERVICE_LOCKS, with columns for the service class name, the service ID, the lock status, and the acquisition timestamp.
For the service ID, make each service generate a unique distributed ID using UUID.randomUUID().
To acquire the lock, issue an UPDATE that tries to grab it, then check whether you actually got it; don't do a select, then a check, then an update, because that sequence is not atomic. Locks older than a certain amount of time should not be taken into account.
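As a rough JDBC sketch of that acquire step (the column names and the staleness window are assumptions; the update count tells you atomically whether you grabbed the row, so no separate check query is needed):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Timestamp;
import java.time.Instant;
import java.time.temporal.ChronoUnit;

public class ServiceLockDao {
    // One atomic UPDATE: we get the lock only if it is free, or if the
    // previous holder's acquisition timestamp is older than the cutoff.
    public static boolean tryAcquire(Connection con, String serviceClass,
                                     String serviceId) throws SQLException {
        String sql = "UPDATE SERVICE_LOCKS "
                   + "SET SERVICE_ID = ?, LOCK_STATUS = 'LOCKED', "
                   + "    ACQUIRED_AT = CURRENT_TIMESTAMP "
                   + "WHERE SERVICE_CLASS = ? "
                   + "  AND (LOCK_STATUS = 'FREE' OR ACQUIRED_AT < ?)";
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setString(1, serviceId);
            ps.setString(2, serviceClass);
            ps.setTimestamp(3, Timestamp.from(
                Instant.now().minus(5, ChronoUnit.MINUTES)));  // staleness cutoff
            return ps.executeUpdate() == 1;  // one row changed => the lock is ours
        }
    }
}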
This approach is an implementation of the coarse-grained lock design pattern, where an application-level pessimistic lock is acquired to lock shared resources.
Depending on the business logic on the services and the type of transaction manager you use, increasing the isolation level of the service method to REPEATABLE_READ might be an option.
For a solution that does not involve the database, have a look at a framework for distributed concurrent processing based on the Actor concurrency model: the Akka framework (see its Remoting module).
We are planning to use the Hibernate framework for an application which is something like an e-commerce application.
We have a requirement in which, if a user checks out an item and proceeds to the gateway, we will lock the item for 7 minutes, after which the item is released.
How can we design the above requirement? Ideas are appreciated.
You probably don't want to use a database level lock for this. Most databases and database configurations are not built around the concept of long-held locks.
The most generic approach I can think of is to build some kind of locking service in your application. The locking service has synchronized methods like tryObtainLock, which obtains the lock and returns true if it is available, or returns false if it is not (at which point you probably send an error to the user informing them that the item is locked).
The locking service can then store in its own table a list of locks, who asked for them, and when they were obtained. Every so often you can check whether any locks have been held for over 7 minutes and, if so, release them and notify the person who obtained the lock that they no longer have it.
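Here is a minimal in-memory sketch of such a service, using a ConcurrentHashMap in place of synchronized methods (all names are mine; the table-backed variant described above would persist the same state):

import java.time.Duration;
import java.time.Instant;
import java.util.concurrent.ConcurrentHashMap;

// Expiring item locks: a lock lasts at most HOLD_TIME, so an abandoned
// checkout frees the item by itself.
public class ItemLockService {
    private static final Duration HOLD_TIME = Duration.ofMinutes(7);

    private static final class Holder {
        final String userId;
        final Instant acquiredAt;
        Holder(String userId, Instant acquiredAt) {
            this.userId = userId;
            this.acquiredAt = acquiredAt;
        }
    }

    private final ConcurrentHashMap<Long, Holder> locks = new ConcurrentHashMap<>();

    // True if the caller now holds the item; compute() is atomic per key,
    // and expired holders are simply replaced.
    public boolean tryObtainLock(long itemId, String userId) {
        Instant now = Instant.now();
        Holder h = locks.compute(itemId, (id, cur) ->
            (cur == null || cur.acquiredAt.plus(HOLD_TIME).isBefore(now))
                ? new Holder(userId, now)
                : cur);
        return h.userId.equals(userId);
    }

    // Release only if the caller is still the holder.
    public void release(long itemId, String userId) {
        locks.computeIfPresent(itemId, (id, cur) ->
            cur.userId.equals(userId) ? null : cur);
    }
}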
We have an application which is currently threaded (about 50 threads) to process transactions.
We have set up a Redis database and use DECRBY to deduct credits from a user's account.
Here is an example of the process:
1. Get amount of credits for this transaction
2. Get the current credit amount from Redis: GET <key>
3. If the current credit amount covers the cost of the transaction, continue
4. DECRBY the transaction amount from Redis.
The issue I have here is obvious: when a user's credits reach 0, the transaction does fail (good), but about 10-20 extra transactions get through first because of the threading.
I have thought of setting up WATCH, MULTI, EXEC with Redis and then retrying, but won't this cause a bottleneck (I believe this is called contention) because the threads will be constantly fighting to complete the transaction?
Any suggestions ?
Locking is what you need. Since DB locks are expensive, you can implement a simple locking scheme in Redis using SETNX and avoid race conditions that way. It's well explained here: http://redis.io/commands/setnx. But you still need to implement retries at the application level.
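As a sketch of that scheme with the Jedis client (the key name and TTL are my assumptions; note that modern Redis can combine SETNX with an expiry in a single SET call, which avoids leaving a stale lock behind if a holder crashes):

import redis.clients.jedis.Jedis;
import redis.clients.jedis.params.SetParams;

// One lock per user account: SET ... NX only succeeds if the key is absent,
// and the PX expiry frees the lock even if the holder dies.
public class CreditLock {
    public static boolean tryLock(Jedis jedis, String userId) {
        String reply = jedis.set("lock:credits:" + userId, "1",
                                 SetParams.setParams().nx().px(5000));
        return "OK".equals(reply);
    }

    public static void unlock(Jedis jedis, String userId) {
        jedis.del("lock:credits:" + userId);  // naive: a real unlock should first
                                              // verify it still owns the lock
    }
}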
It isn't the most conventional way of doing it IMO (the most usual way is probably to use a lock in an RDBMS), but using WATCH, MULTI, EXEC looks akin to compare-and-swap (CAS), and it doesn't seem too weird to me.
I'd assume that the authors of Redis intended WATCH to be used like this. The performance implication obviously depends on how it is implemented (which I don't know), but my bet is that it will perform pretty well.
That is because there is likely to be very little or no contention for the same keys in your situation (what is the chance of a user frantically issuing transactions for him/herself?), so the success rate of the first swap operation will be very good and the retry will only happen in rare cases. And since Redis is a credible piece of software, its authors probably know what they are doing (i.e. low contention means an easy job for Redis, so it can probably handle it!).
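For concreteness, a sketch of that check-and-decrement CAS loop, assuming the Jedis client (where an EXEC aborted by WATCH comes back as a null result):

import java.util.List;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.Transaction;

public class CreditCharger {
    // Check-and-decrement as a CAS loop: WATCH makes EXEC abort if anyone
    // else touched the balance key between our GET and the EXEC.
    public static boolean charge(Jedis jedis, String key, long cost) {
        while (true) {
            jedis.watch(key);
            String current = jedis.get(key);
            long balance = (current == null) ? 0 : Long.parseLong(current);
            if (balance < cost) {
                jedis.unwatch();          // not enough credits: fail, no retry
                return false;
            }
            Transaction tx = jedis.multi();
            tx.decrBy(key, cost);
            List<Object> result = tx.exec();
            if (result != null) {         // null = watched key changed: retry
                return true;
            }
        }
    }
}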
You could try the Redis-based Lock object implementation for Java provided by the Redisson framework instead of retrying with WATCH/MULTI commands. Working with WATCH/MULTI involves extra requests to Redis on each attempt, which is much slower than working under an already acquired lock.
Here is the code sample:
Lock lock = redisson.getLock("transactionLock");
lock.lock();
try {
    ... // instructions
} finally {
    lock.unlock();
}
I've been working on this for a few days now, and I've found several solutions but none of them incredibly simple or lightweight. The problem is basically this: We have a cluster of 10 machines, each of which is running the same software on a multithreaded ESB platform. I can deal with concurrency issues between threads on the same machine fairly easily, but what about concurrency on the same data on different machines?
Essentially the software receives requests to feed a customer's data from one business to another via web services. However, the customer may or may not exist yet on the other system. If it does not, we create it via a web service method. So it requires a sort of test-and-set, but I need a semaphore of some sort to lock out the other machines from causing race conditions. I've had situations before where a remote customer was created twice for a single local customer, which isn't really desirable.
Solutions I've toyed with conceptually are:
Using our fault-tolerant shared file system to create "lock" files which will be checked for by each machine depending on the customer
Using a special table in our database, and locking the whole table in order to do a "test-and-set" for a lock record.
Using Terracotta, an open source server software which assists in scaling, but uses a hub-and-spoke model.
Using EHCache for synchronous replication of my in-memory "locks."
I can't imagine that I'm the only person who's ever had this kind of problem. How did you solve it? Did you cook something up in-house or do you have a favorite 3rd-party product?
You might want to consider using Hazelcast distributed locks. Super light and easy.
java.util.concurrent.locks.Lock lock = Hazelcast.getLock("mymonitor");
lock.lock();
try {
    // do your stuff
} finally {
    lock.unlock();
}
Hazelcast - Distributed Queue, Map, Set, List, Lock
We use Terracotta, so I would like to vote for that.
I've been following Hazelcast, and it looks like another promising technology, but I can't vote for it since I've not used it, and knowing that it uses a P2P-based system at its heart, I really would not trust it for large scaling needs.
But I have also heard of ZooKeeper, which came out of Yahoo and is moving under the Hadoop umbrella. If you're adventurous about trying out new technology, this one really has lots of promise, since it's very lean and mean, focusing on just coordination. I like the vision and the promise, though it might still be too green.
http://www.terracotta.org
http://wiki.apache.org/hadoop/ZooKeeper
http://www.hazelcast.com
Terracotta is closer to a "tiered" model - all client applications talk to a Terracotta Server Array (and more importantly for scale they don't talk to one another). The Terracotta Server Array is capable of being clustered for both scale and availability (mirrored, for availability, and striped, for scale).
In any case as you probably know Terracotta gives you the ability to express concurrency across the cluster the same way you do in a single JVM by using POJO synchronized/wait/notify or by using any of the java.util.concurrent primitives such as ReentrantReadWriteLock, CyclicBarrier, AtomicLong, FutureTask and so on.
There are a lot of simple recipes demonstrating the use of these primitives in the Terracotta Cookbook.
As an example, I will post the ReentrantReadWriteLock example (note that there is no "Terracotta" version of the lock - you just use the normal Java ReentrantReadWriteLock):
import java.util.concurrent.locks.*;

public class Main
{
    public static final Main instance = new Main();

    private int counter = 0;
    private final ReentrantReadWriteLock rwl = new ReentrantReadWriteLock(true);

    public void read()
    {
        while (true) {
            rwl.readLock().lock();
            try {
                System.out.println("Counter is " + counter);
            } finally {
                rwl.readLock().unlock();
            }
            try { Thread.sleep(1000); } catch (InterruptedException ie) { }
        }
    }

    public void write()
    {
        while (true) {
            rwl.writeLock().lock();
            try {
                counter++;
                System.out.println("Incrementing counter. Counter is " + counter);
            } finally {
                rwl.writeLock().unlock();
            }
            try { Thread.sleep(3000); } catch (InterruptedException ie) { }
        }
    }

    public static void main(String[] args)
    {
        if (args.length > 0) {
            // args --> Writer
            instance.write();
        } else {
            // no args --> Reader
            instance.read();
        }
    }
}
I recommend using Redisson. It implements over 30 distributed data structures and services, including java.util.concurrent.locks.Lock. Usage example:
Config config = new Config();
config.addAddress("some.server.com:8291");
Redisson redisson = Redisson.create(config);

Lock lock = redisson.getLock("anyLock");
lock.lock();
try {
    ...
} finally {
    lock.unlock();
}

redisson.shutdown();
I was going to advise using memcached as a very fast, distributed RAM store for keeping the locks; but it seems that EHCache is a similar, though more Java-centric, project.
Either one is the way to go, as long as you're sure to use atomic updates (memcached supports them; I don't know about EHCache). It's by far the most scalable solution.
As a related data point, Google uses 'Chubby', a fast, RAM-based distributed lock store, as the root of several systems, among them BigTable.
I have done a lot of work with Coherence, which allowed several approaches to implementing a distributed lock. The naive approach was to request to lock the same logical object on all participating nodes. In Coherence terms this was locking a key on a Replicated Cache. This approach doesn't scale that well because the network traffic increases linearly as you add nodes. A smarter way was to use a Distributed Cache, where each node in the cluster is naturally responsible for a portion of the key space, so locking a key in such a cache always involved communication with at most one node. You could roll your own approach based on this idea, or better still, get Coherence. It really is the scalability toolkit of your dreams.
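For reference, a sketch of that key-level lock on a Coherence cache (the cache name and timeout are mine; in the classic API, NamedCache exposes lock/unlock through the ConcurrentMap interface):

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

// Key-level locking on a Distributed Cache: locking one key talks only to
// the single node that owns that portion of the key space.
public class CustomerSync {
    public static void withCustomerLock(Object customerId, Runnable work) {
        NamedCache cache = CacheFactory.getCache("customer-locks");
        if (cache.lock(customerId, 5000)) {   // wait up to 5s for the key lock
            try {
                work.run();                   // test-and-set the customer here
            } finally {
                cache.unlock(customerId);
            }
        }
    }
}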
I would add that any half-decent multi-node, network-based locking mechanism would have to be reasonably sophisticated to act correctly in the event of a network failure.
Not sure if I understand the entire context, but it sounds like you have one single database backing this? Why not make use of the database's locking: if creating the customer is a single INSERT, then that statement alone can serve as the lock, since the database will reject a second INSERT that violates one of your constraints (e.g. that the customer name is unique).
If the "inserting of a customer" operation is not atomic and is a batch of statements then I would introduce (or use) an initial INSERT that creates some simple basic record identifying your customer (with the necessary UNIQUEness constraints) and then do all the other inserts/updates in the same transaction. Again the database will take care of consistency and any concurrent modifications will result in one of them failing.
I made a simple RMI service with two methods: lock and release. Both methods take a key (my data model used UUIDs as primary keys, so the same key served as the locking key).
RMI is a good solution for this because it's centralized. You can't do this with EJBs (especially in a cluster, as you don't know on which machine your call will land). Plus, it's easy.
It worked for me.
If you can set up your load balancing so that requests for a single customer always get mapped to the same server then you can handle this via local synchronization. For example, take your customer ID mod 10 to find which of the 10 nodes to use.
Even if you don't want to do this in the general case your nodes could proxy to each other for this specific type of request.
Assuming your users are uniform enough (i.e. if you have a ton of them) that you don't expect hot spots to pop up where one node gets overloaded, this should still scale pretty well.
You might also consider Cacheonix for distributed locks. Unlike anything else mentioned here, Cacheonix supports ReadWrite locks with lock escalation from read to write when needed:
ReadWriteLock rwLock = Cacheonix.getInstance().getCluster().getReadWriteLock();
Lock lock = rwLock.getWriteLock();
lock.lock();    // acquire the cluster-wide write lock
try {
    ...
} finally {
    lock.unlock();
}
Full disclosure: I am a Cacheonix developer.
Since you are already connecting to a database, before adding another piece of infrastructure take a look at JdbcSemaphore. It is simple to use:
JdbcSemaphore semaphore = new JdbcSemaphore(ds, semName, maxReservations);

boolean acq = semaphore.tryAcquire(1, TimeUnit.MINUTES);   // wait up to one minute
if (acq) {
    // do stuff
    semaphore.release();
} else {
    throw new TimeoutException();
}
It is part of the spf4j library.
Back in the day, we'd use a specific "lock server" on the network to handle this. Bleh.
Your database server might have resources specifically for doing this kind of thing. MS-SQL Server has application locks usable through the sp_getapplock/sp_releaseapplock stored procedures.
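For example, a sketch of taking such an application lock from JDBC (the resource name and timeout are placeholders; sp_getapplock returns 0 or 1 on success and a negative code on failure):

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.SQLException;

public class AppLock {
    // sp_getapplock takes a named, server-managed lock that is independent of
    // any table row; sp_releaseapplock gives it back.
    public static boolean tryAppLock(Connection con, String resource)
            throws SQLException {
        try (CallableStatement cs =
                 con.prepareCall("{? = call sp_getapplock(?, ?, ?, ?)}")) {
            cs.registerOutParameter(1, java.sql.Types.INTEGER);
            cs.setString(2, resource);       // @Resource
            cs.setString(3, "Exclusive");    // @LockMode
            cs.setString(4, "Session");      // @LockOwner (no surrounding transaction)
            cs.setInt(5, 10000);             // @LockTimeout in milliseconds
            cs.execute();
            return cs.getInt(1) >= 0;        // 0 = granted, 1 = granted after wait
        }
    }
}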
We have been developing an open source distributed synchronization framework; currently DistributedReentrantLock and DistributedReentrantReadWriteLock have been implemented, but they are still in the testing and refactoring phase. In our architecture, lock keys are divided into buckets, and each node is responsible for a certain number of buckets. So, effectively, a successful lock request costs only one network request. We are also using the AbstractQueuedSynchronizer class as the local lock state, so all failed lock requests are handled locally; this drastically reduces network traffic.
We are using JGroups (http://jgroups.org) for group communication and Hessian for serialization.
For details, please check out http://code.google.com/p/vitrit/.
Please send me your valuable feedback.
Kamran