How to lock a Java method to prevent multiple invocations - java

I have an application that does a replication from a remote database every 15 minutes or so. It just keeps the two repositories in sync. While a replication is in progress, it must not be started again. I have set up the following structure, but I'm not sure if it is the correct approach.
public class ReplicatorRunner {

    private static Lock lock = new ReentrantLock();

    public void replicate() {
        if (lock.tryLock()) {
            try {
                // long running process
            } catch (Exception e) {
            } finally {
                lock.unlock();
            }
        } else {
            throw new IllegalStateException("already replicating");
        }
    }
}
public class ReplicatorRunnerInvocator {

    public void someMethod() {
        try {
            ReplicatorRunner replicator = new ReplicatorRunner();
            replicator.replicate();
        } catch (IllegalStateException e) {
            e.printStackTrace();
        }
    }
}
ReplicatorRunner is the class that owns the replicate method, which may only run once at a time.
Edit.
I need the next call to fail (not block) if the method is already running on any instance.

This looks good. ReentrantLock.tryLock() will only give the lock to one thread, so synchronized is not necessary. It also avoids the blocking inherent in synchronization, which you say you need to avoid. ReentrantLock is Serializable, so should work across your cluster.
Go for it.

Change public void replicate() to public synchronized void replicate()
That way replicate will only ever allow access to one thread at a time. You'll also be able to delete the ReentrantLock and all associated code.
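A minimal sketch of that change, reusing the class and method from the question (the synchronized keyword is the only addition):
public class ReplicatorRunner {

    // Only one thread can be inside replicate() at a time; other callers
    // block on the intrinsic lock rather than fail fast.
    public synchronized void replicate() {
        // long running process
    }
}
Note that blocked callers wait rather than fail, which does not match the fail-fast requirement added in the question's edit.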

I ended up using the following:
public class ReplicatorRunner {

    private static Semaphore lock = new Semaphore(1);

    public void replicate() {
        if (lock.tryAcquire()) {
            try {
                // basic setup
                Thread t = new Thread(new Runnable() {
                    public void run() {
                        try {
                            // long running process
                        } catch (Exception e) {
                            // handle the exceptions
                        } finally {
                            lock.release();
                        }
                    }
                });
                t.start();
            } catch (Exception e) {
                // in case something goes wrong
                // before the thread starts
                lock.release();
            }
        } else {
            throw new IllegalStateException("already replicating");
        }
    }
}
public class ReplicatorRunnerInvocator {

    public void someMethod() {
        try {
            ReplicatorRunner replicator = new ReplicatorRunner();
            replicator.replicate();
        } catch (IllegalStateException e) {
            e.printStackTrace();
        }
    }
}

Without looking at the specifics of the ReentrantLock, it occurs to me that this prevention of multiple simultaneous replication routines will be limited to a single JVM instance.
If another instance of the class is kicked off in a separate JVM, then you might be in trouble.
Why not put a lock mechanism on the database? That is, a row in a control table whose value indicates whether or not the replication is currently running, and which is reset when the replication is finished.
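A hypothetical sketch of such a database lock, assuming a single-row control table REPLICATION_LOCK(ID, BUSY); the table, column, and class names are illustrative, not from the original post:
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class DbReplicationLock {

    /** Returns true only for the one caller that flips BUSY from 0 to 1. */
    public boolean tryAcquire(Connection conn) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "UPDATE REPLICATION_LOCK SET BUSY = 1 WHERE ID = 1 AND BUSY = 0")) {
            return ps.executeUpdate() == 1;
        }
    }

    /** Resets the flag once the replication is finished. */
    public void release(Connection conn) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "UPDATE REPLICATION_LOCK SET BUSY = 0 WHERE ID = 1")) {
            ps.executeUpdate();
        }
    }
}
Because the check-and-set happens in a single UPDATE, only one node can acquire the lock at a time, regardless of how many JVMs are running.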

Take a look at the Semaphore class here, or mark the method as synchronized.
The thread executing the method at any given time owns its lock, preventing other threads from calling the method until its execution ends.
Edit: if you want the other threads to fail, you could use a Lock and test whether the lock is available with the tryLock method.

Related

Java FutureTask - Multithreaded call to get()

I have the following two methods in a class:
private MyDef myDef;
private FutureTask<MyDef> defFutureTask;

public synchronized void periodEviction() {
    myDef = null;
}

public MyDef loadMyItems() {
    // if it's not ready use a future - it will block until the results are ready
    if (this.myDef == null) { // this will still not be thread safe
        Callable<MyDef> callableDef = () -> { return this.loadFromDatabase(); };
        FutureTask<MyDef> defTask = new FutureTask<>(callableDef);
        this.defFutureTask = defTask;
        defFutureTask.run();
    }
    try {
        // wait until it's ready
        this.myDef = this.defFutureTask.get();
    } catch (InterruptedException e) {
        log.error(this.getClass(), "Interrupted whilst getting future..");
    } catch (ExecutionException e) {
        log.error(this.getClass(), "Error when executing callable future");
    }
    return this.myDef;
}
I wanted to do the following:
1) Do a cache eviction using periodEviction() every hour or so.
2) Otherwise, use the cached value when db loading is done.
I believe I have misunderstood Java futures, as I couldn't answer the question, "What happens when Thread A, B, and C all call loadMyItems() at the same time?"
So does this mean without something like an executor, this implementation is still not thread safe?
An even simpler approach is to not cache the object at all but just retain the Future.
private CompletableFuture<MyDef> defFuture;

public synchronized void periodEviction() {
    // evict by triggering the request anew
    defFuture = CompletableFuture.supplyAsync(this::loadFromDatabase);
}

public synchronized Optional<MyDef> loadMyItems() {
    try {
        return Optional.of(this.defFuture.get());
    } catch (InterruptedException e) {
        log.error(this.getClass(), "Interrupted whilst getting future..");
    } catch (ExecutionException e) {
        log.error(this.getClass(), "Error when executing callable future");
    }
    return Optional.empty();
}
With the caveat that this will trigger the database query every eviction period rather than on demand.
A super simple approach would be to declare loadMyItems as synchronized. But if the class has other methods that access myDef, you would have to declare those synchronized too. Sometimes this results in very coarse-grained locking and slower performance.
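A minimal sketch of that coarse-grained variant, assuming the same myDef field and loadFromDatabase() method as in the question:
private MyDef myDef;

public synchronized void periodEviction() {
    myDef = null;
}

public synchronized MyDef loadMyItems() {
    // the first caller after an eviction reloads; later callers reuse the cache
    if (myDef == null) {
        myDef = loadFromDatabase();
    }
    return myDef;
}
Both methods share the instance's intrinsic lock, so concurrent callers of loadMyItems() serialize and the database is hit at most once per eviction.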
If you're looking for the cleanest/fastest code, instead of declaring periodEviction as synchronized, declare myDef as an AtomicReference:
private final AtomicReference<MyDef> myDef = new AtomicReference<>();
Then the body of periodEviction is:
synchronized (myDef) {
    myDef.set(null);
}
And the body of loadMyItems is:
synchronized (myDef) {
    if (myDef.get() == null) {
        // perform initialization steps, ending with:
        myDef.set(this.defFutureTask.get());
    }
    return myDef.get();
}
If many threads call loadMyItems at the same time, myDef will only ever be initialized once, and they will all get the same object returned (unless somehow a call to periodEviction snuck in the middle).

How to know when a thread stops or is stopped?

I have the following code:
Executor exe = Executors.newFixedThreadPool(20);
while (true) {
    try {
        exe.execute(new DispatcherThread(serverSocket.accept()));
        continue;
    } catch (SocketException sExcp) {
        System.exit(-1);
    } catch (Exception excp) {
        System.exit(-1);
    }
}
For each DispatcherThread I create a connection to the database (which means I have 20 connections). What I need to know is how I can close the database connection when the thread is stopped, stops on its own, or finishes its flow.
You cannot directly know when the thread is stopped. The closest thing you have is the Thread#isAlive method, but it is not a reliable signal: it may return false once the run method has finished even though the thread itself may not yet be fully stopped, since the JVM makes no guarantee about exactly when that happens. If your DispatcherThread class implements the Runnable interface, you can instead write the cleanup at the bottom of the run method.
Skeleton code:
class DispatcherThread implements Runnable {

    @Override
    public void run() {
        try {
            // open database connection and such...
            // ...
            // handle the work here...
        } catch (Exception e) {
            // ALWAYS handle the exceptions
        } finally {
            // cleanup tasks like close database connection
        }
    }
}
By the way, the Thread suffix is not a good name for a class that technically is not a thread (because it doesn't extend Thread). Instead, give it a proper name that reflects what it does.
You could close the thread-specific connection at the end of your run() method.
A finally block would ensure that it happened however the run() method exited.
class DispatcherThread implements Runnable {

    public void run() {
        ...
        try {
            ...
        } finally {
            // Close the connection
        }
    }
}

Making only one of the running threads execute a catch block?

I have a multi-threaded Java program and at times it throws an exception which requires some changes to my network settings. The problem I'm facing is that all the running threads try to do it, which causes problems. Is there a way to make only one of the running threads execute the code in the catch block?
This is my catch block
catch (ElementNotFoundException e)
{
    System.out.println("Element not found!");
    e.printStackTrace();
    IpManager.changeDSLIp();
}
catch (Exception e)
{
    e.printStackTrace();
}
// Define a flag to indicate the network settings change has been done.
public static boolean NETWORK_SETTINGS_DONE = false;
public static Object LOCK = new Object();

public void doSomething() {
    try {
        // code that may throw...
    } catch (Exception e) {
        synchronized (LOCK) {
            if (!NETWORK_SETTINGS_DONE) {
                // do some changes to your network settings.
                NETWORK_SETTINGS_DONE = true;
            }
        }
    }
}
You need a single central service to handle the change to the network settings, so you can prevent multiple changes at the same time and still give reasonable feedback. Your threads, or more specifically your exception handling code, need to transfer the responsibility to this service and may use it to check the current status in order to handle the exception in an appropriate way (it sounds like the threads would have to wait for the new settings).
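An illustrative sketch of such a service; the class and method names here are made up, and IpManager.changeDSLIp() is taken from the question:
import java.util.concurrent.atomic.AtomicBoolean;

public class NetworkSettingsService {

    private final AtomicBoolean changeInProgress = new AtomicBoolean(false);

    /** Called from the catch block; only the first caller performs the change. */
    public void requestChange() {
        if (changeInProgress.compareAndSet(false, true)) {
            try {
                IpManager.changeDSLIp();
            } finally {
                changeInProgress.set(false);
            }
        }
        // other callers fall through; they could also wait here for the new settings
    }
}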
Something like this might do the trick...
} catch (Exception e) {
    synchronized (someCommonObject) {
        if (!done) {
            // do stuff
        }
        done = true;
    }
}
someCommonObject could be "this", but only if all threads are being handled by that object. Otherwise pick some other (possibly static) object. The "done" boolean would also have to be static / commonly referenced.

Thread Blocks in server mode

This method notifies an event loop to start processing a message. However, if the event loop is already processing a message, then this method blocks until it receives a notification of completed event processing (which is triggered at the end of the event loop).
public void processEvent(EventMessage request) throws Exception {
    System.out.println("processEvent");
    if (processingEvent) {
        synchronized (eventCompleted) {
            System.out.println("processEvent: Wait for Event to completed");
            eventCompleted.wait();
            System.out.println("processEvent: Event completed");
        }
    }
    myRequest = request;
    processingEvent = true;
    synchronized (eventReady) {
        eventReady.notifyAll();
    }
}
This works in client mode. If I switch to server mode and the time spent in the event loop processing the message is too quick, then the method above blocks forever waiting for the event to complete. For some reason the event-complete notification is sent after the processingEvent check and before the eventCompleted.wait(). It makes no difference if I remove the output statements. I cannot reproduce the same problem in client mode.
Why does this only happen in server mode and what can I do to prevent this happening?
Here is the eventReady wait and eventCompleted notification:
public void run() {
    try {
        while (true) {
            try {
                synchronized (eventReady) {
                    eventReady.wait();
                }
                nx.processEvent(myRequest, myResultSet);
                if (processingEvent > 0) {
                    notifyInterface.notifyEventComplete(myRequest);
                }
            } catch (InterruptedException e) {
                throw e;
            } catch (Exception e) {
                notifyInterface.notifyException(e, myRequest);
            } finally {
                processingEvent--;
                synchronized (eventCompleted) {
                    eventCompleted.notifyAll();
                }
            }
        } // End of while loop
    } catch (InterruptedException Ignore) {
    } finally {
        me = null;
    }
}
Here is the revised code, which seems to work without the deadlock problem - which, by the way, happened in client mode randomly after about 300 events.
private BlockingQueue<EventMessage> queue = new SynchronousQueue<EventMessage>();

public void processEvent(EventMessage request) throws Exception {
    System.out.println("processEvent");
    queue.put(request);
}

public void run() {
    try {
        while (true) {
            EventMessage request = null;
            try {
                request = queue.take();
                processingEvent = true;
                nx.processEvent(request, myResultSet);
                notifyInterface.notifyEventComplete(request);
            } catch (InterruptedException e) {
                throw e;
            } catch (Exception e) {
                notifyInterface.notifyException(e, request);
            } finally {
                if (processingEvent) {
                    synchronized (eventCompleted) {
                        processingEvent = false;
                        eventCompleted.notifyAll();
                    }
                }
            }
        } // End of while loop
    } catch (InterruptedException Ignore) {
    } finally {
        me = null;
    }
}
If you call notifyAll and no thread is wait()ing, the notify is lost.
The correct approach is to always change a state, inside the synchronized block, when calling notify() and always check that state, inside the synchronized block, before calling wait().
Also your use of processingEvent doesn't appear to be thread safe.
Can you provide the code which waits on eventReady and notifies eventCompleted?
Your program can happen to work if you speed up or slow down your application just right, e.g. if you use -client, but if you use a different machine, JVM or JVM options it can fail.
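A minimal sketch of that guarded wait/notify pattern, using an illustrative eventDone flag rather than the original fields:
private final Object monitor = new Object();
private boolean eventDone = false;   // state guarded by the monitor

// Waiter: check the state inside the synchronized block before waiting.
public void awaitEvent() throws InterruptedException {
    synchronized (monitor) {
        while (!eventDone) {
            monitor.wait();
        }
        eventDone = false;   // consume the signal
    }
}

// Notifier: change the state inside the same lock before notifying.
public void signalEvent() {
    synchronized (monitor) {
        eventDone = true;
        monitor.notifyAll();
    }
}
Because the flag is set before notifyAll() and re-checked before wait(), a notification that arrives early is never lost.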
There are a number of race conditions in your code. Even declaring processingEvent volatile or using an AtomicBoolean won't help. I would recommend using a SynchronousQueue, which will block the event until the processor is ready for it. Something like:
private final BlockingQueue<Request> queue = new SynchronousQueue<Request>();
...
// this will block until the processor dequeues it
queue.put(request);
Then the event processor does:
while (!done) {
    // this will block until an event is put-ed to the queue
    Request request = queue.take();
    // process the event ...
}
Only one request will be processed at once and all of the synchronization, etc. will be handled by the SynchronousQueue.
If processingEvent isn't declared volatile or accessed from within a synchronized block then updates made by one thread may not become visible to other threads immediately. It's not clear from your code whether this is the case, though.
The "server" VM is optimised for speed (at the expense of startup time and memory usage) which could be the reason why you didn't encounter this problem when using the "client" VM.
There is a race condition in your code that may be exacerbated by using the server VM, and if processingEvent is not volatile then perhaps certain optimizations made by the server VM or its environment are further influencing the problem.
The problem with your code (assuming this method is accessed by multiple threads concurrently) is that between your check of processingEvent and eventCompleted.wait(), another thread can already notify and (I assume) set processingEvent to false.
The simplest solution to your blocking problem is to not try to manage it yourself, and just let the JVM do it by using a shared lock (if you only want to process one event at a time). So you could just synchronize the entire method, for instance, and not worry about it.
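A rough sketch of that whole-method locking option, reusing fields from the question's code (myRequest, nx, myResultSet, notifyInterface) and assuming one event at a time is acceptable:
// The intrinsic lock on "this" serialises all event processing; a second
// caller simply blocks until the first event has been handled.
public synchronized void processEvent(EventMessage request) throws Exception {
    myRequest = request;
    nx.processEvent(myRequest, myResultSet);
    notifyInterface.notifyEventComplete(myRequest);
}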
A second simple solution is to use a SynchronousQueue (this is the type of situation it is designed for) for your event passing; or if you have more executing threads and want more than 1 element in the queue at a time then you can use an ArrayBlockingQueue instead. Eg:
private SynchronousQueue<EventMessage> queue = new SynchronousQueue<EventMessage>();

public void addEvent(EventMessage request) throws Exception {
    System.out.println("Adding event");
    queue.put(request);
}

public void processNextEvent() throws InterruptedException {
    EventMessage request = queue.take();
    processMyEvent(request);
}

// Your queue executing thread
public void run() {
    while (!terminated) {
        try {
            processNextEvent();
        } catch (InterruptedException e) {
            // restore the interrupt flag and stop the loop
            Thread.currentThread().interrupt();
            break;
        }
    }
}

Is there a Mutex in Java?

Is there a Mutex object in java or a way to create one?
I am asking because a Semaphore object initialized with 1 permit does not help me.
Think of this case:
try {
    semaphore.acquire();
    // do stuff
    semaphore.release();
} catch (Exception e) {
    semaphore.release();
}
If an exception happens at the first acquire, the release in the catch block will increase the permits, and the semaphore will no longer be binary.
Would the following be the correct way?
try {
    semaphore.acquire();
    // do stuff
} catch (Exception e) {
    // exception stuff
} finally {
    semaphore.release();
}
Will the above code ensure that the semaphore will be binary?
Any object in Java can be used as a lock using a synchronized block. This will also automatically take care of releasing the lock when an exception occurs.
Object someObject = ...;

synchronized (someObject) {
    ...
}
You can read more about this here: Intrinsic Locks and Synchronization
See this page: http://www.oracle.com/technetwork/articles/javase/index-140767.html
It has a slightly different pattern which is (I think) what you are looking for:
try {
    mutex.acquire();
    try {
        // do something
    } finally {
        mutex.release();
    }
} catch (InterruptedException ie) {
    // ...
}
In this usage, you're only calling release() after a successful acquire().
No one has clearly mentioned this, but this kind of pattern is usually not suited to semaphores. The reason is that any thread can release a semaphore, but you usually only want the owner thread that originally acquired the lock to be able to release it. For this use case, in Java, we usually use ReentrantLocks, which can be created like this:
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;
private final Lock lock = new ReentrantLock(true);
And the usual design pattern of usage is:
lock.lock();
try {
    // do something
} catch (Exception e) {
    // handle the exception
} finally {
    lock.unlock();
}
Here is an example in the java source code where you can see this pattern in action.
Reentrant locks have the added benefit of supporting fairness.
Use semaphores only if you need non-ownership-release semantics.
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;
private final Lock _mutex = new ReentrantLock(true);
_mutex.lock();
// your protected code here
_mutex.unlock();
I think you should try this. When initializing the semaphore:
Semaphore semaphore = new Semaphore(1, true);
And in your Runnable implementation:
try {
    semaphore.acquire(1);
    // do stuff
} catch (Exception e) {
    // Logging
} finally {
    semaphore.release(1);
}
The mistake in the original post is that the acquire() call is placed inside the try block.
Here is the correct approach to using a "binary" semaphore (mutex):
semaphore.acquire();
try {
    // do stuff
} catch (Exception e) {
    // exception stuff
} finally {
    semaphore.release();
}
Each object's intrinsic lock is a little different from the Mutex/Semaphore design.
For example, there is no way with intrinsic locks to correctly implement traversing linked nodes while releasing the previous node's lock and acquiring the next one. With a mutex it is easy to implement:
Node p = getHead();
if (p == null || x == null) return false;
p.lock.acquire();  // Prime loop by acquiring first lock.
                   // If above acquire fails due to interrupt, the method will
                   // throw InterruptedException now, so there is no need for
                   // further cleanup.
for (;;) {
    Node nextp = null;
    boolean found;
    try {
        found = x.equals(p.item);
        if (!found) {
            nextp = p.next;
            if (nextp != null) {
                try {            // Acquire next lock
                                 // while still holding current
                    nextp.lock.acquire();
                } catch (InterruptedException ie) {
                    throw ie;    // Note that finally clause will
                                 // execute before the throw
                }
            }
        }
    } finally {                  // release old lock regardless of outcome
        p.lock.release();
    }
    // Stop when found or at the end of the list; otherwise advance
    // while already holding the next node's lock.
    if (found) return true;
    else if (nextp == null) return false;
    else p = nextp;
}
Currently, there is no such class in java.util.concurrent, but you can find a Mutex implementation here: Mutex.java. As for the standard libraries, Semaphore provides all of this functionality and much more.
To ensure that a Semaphore is binary you just need to make sure you pass in the number of permits as 1 when creating the semaphore. The Javadocs have a bit more explanation.
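For example, a minimal sketch of creating and using a single-permit (binary) semaphore:
import java.util.concurrent.Semaphore;

public class BinarySemaphoreExample {

    // One permit makes the semaphore binary.
    private static final Semaphore binary = new Semaphore(1);

    public static void main(String[] args) throws InterruptedException {
        binary.acquire();
        try {
            // critical section: at most one thread can be here at a time
        } finally {
            binary.release();
        }
    }
}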
