I'm looking for a recommendation on how to make this code thread-safe with locks in Java. I know there are a lot of gotchas with locks: obscure problems, race conditions, etc., that can pop up. Here is the basic idea of what I'm trying to achieve, implemented rather naïvely:
public class MultipleThreadWriter {
    boolean isUpgrading = false;
    boolean isWriting = false;

    public void writeData(String uniqueId) {
        if (isUpgrading) {
            //block until isUpgrading is false
        }
        isWriting = true;
        //do write stuff
        isWriting = false;
    }

    public void upgradeSystem() {
        if (isWriting) {
            //block until isWriting is false
        }
        isUpgrading = true;
        //do updates
        isUpgrading = false;
    }
}
The basic idea is that multiple threads are allowed to write data simultaneously. That is fine, since no two threads will ever write data pertaining to the same uniqueId. However, the "system upgrade" manipulates data for all uniqueIds, so it must block (wait in line) until no data is being written before it can start, at which point it blocks all writes until it is finished. (There is definitely no producer/consumer pattern going on here; the upgrade occurs arbitrarily, i.e. it has no relation to the data being written.)
This sounds like a good application for a readers-writer lock.
However, in this case your "readers" are the small update tasks that can all run concurrently, and your "writer" is the system upgrade task.
There's an implementation of this in the Java standard library:
java.util.concurrent.locks.ReentrantReadWriteLock
The lock has fair and non-fair modes. If you want the system upgrade to run as soon as possible after it's scheduled, then use the fair mode of the lock. If you want the upgrade to be applied during idle time (i.e., wait until there are no small updates going on), then you can use the non-fair mode instead.
Since this is a bit of an unorthodox application of the readers-writer lock (your readers are actually writing too!), make sure to comment this well in your code. You might even consider writing a wrapper around the ReentrantReadWriteLock class that provides localUpdateLock vs globalUpdateLock methods, which delegate to the readLock and writeLock, respectively.
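For instance, a minimal sketch of such a wrapper (the class name and the fair-mode choice here are just illustrative):

import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class UpgradeLock {
    // fair mode, so a pending global upgrade is not starved by a stream of new local updates
    private final ReentrantReadWriteLock rwl = new ReentrantReadWriteLock(true);

    /** Shared lock: any number of per-uniqueId writes may hold this at the same time. */
    public Lock localUpdateLock() {
        return rwl.readLock();
    }

    /** Exclusive lock: the system upgrade holds this alone. */
    public Lock globalUpdateLock() {
        return rwl.writeLock();
    }
}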
Based on the answer from @DaoWen, this is my (untested) solution.
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class MultipleThreadWriter {
    private final ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();
    private final Lock r = rwl.readLock();
    private final Lock w = rwl.writeLock();

    public void writeData(String uniqueId) {
        r.lock();
        try {
            //do write stuff
        } finally {
            r.unlock();
        }
    }

    public void upgradeSystem() {
        w.lock();
        try {
            //do updates
        } finally {
            w.unlock();
        }
    }
}
Related
I have a utility class as follows:
public class MetaUtility {
    private static final SparseArray<MetaInfo> metaInfo = new SparseArray<>();

    public static void flush() {
        metaInfo.clear();
    }

    public static void addMeta(int key, MetaInfo info) {
        if (info == null) {
            throw new NullPointerException();
        }
        metaInfo.append(key, info);
    }

    public static MetaInfo getMeta(int key) {
        return metaInfo.get(key);
    }
}
This class is very simple and I wanted to have a "central" container to be used across classes/activities.
The issue is threading.
Right now it is populated (i.e. addMeta is called) in only one place in the code (not on the UI thread), and that is not going to change.
The getter is accessed by the UI thread and, in some cases, by background threads.
Carefully reviewing the code, I don't think I would ever end up in a situation where the background thread adds elements to the sparse array while some other thread tries to access it.
But that is very hard for anyone to know unless they know the code very well.
My question is: how could I design my class so that I can safely use it from all threads, including the UI thread?
I can't just add synchronized or make it block, because that would block the UI thread. What can I do?
You should just synchronize on your object, because what your class is right now is just a wrapper class around a SparseArray. If there are thread level blocking issues, they would be from misuse of this object (well, I guess class considering it only exposes public static methods) in some other part of your project.
A first attempt can be with synchronized.
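A minimal sketch of that synchronized version (same class as in the question, with every access guarded by one shared lock object):

import android.util.SparseArray;

public class MetaUtility {
    private static final SparseArray<MetaInfo> metaInfo = new SparseArray<>();
    private static final Object LOCK = new Object();

    public static void flush() {
        synchronized (LOCK) {
            metaInfo.clear();
        }
    }

    public static void addMeta(int key, MetaInfo info) {
        if (info == null) {
            throw new NullPointerException();
        }
        synchronized (LOCK) {
            metaInfo.append(key, info);
        }
    }

    public static MetaInfo getMeta(int key) {
        synchronized (LOCK) {
            return metaInfo.get(key);
        }
    }
}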
@Jim What about the thread scheduling latency?
Android scheduler is based on Linux and it is known as a completely fair scheduler (CFS). It is "fair" in the sense that it tries to balance the execution of tasks not only based on the priority of the thread but also by tracking the amount of execution time that has been given to a thread.
If you see "Skipped xx frames! The application may be doing too much work on its main thread", then you need some optimisations.
If you have an uncontended lock, you should not be afraid of using synchronized. In that case the lock should be thin, which means it does not hand the blocked thread to the OS scheduler but instead retries acquiring the lock a few instructions later. But if you still want to write a non-blocking implementation, you could use an AtomicReference to hold the SparseArray<MetaInfo> and update it with CAS.
The code might be something like this:
static AtomicReference<SparseArray<MetaInfo>> atomicReference =
        new AtomicReference<>(new SparseArray<MetaInfo>());

public static void flush() {
    atomicReference.set(new SparseArray<MetaInfo>());
}

public static void addMeta(int key, MetaInfo info) {
    if (info == null) {
        throw new NullPointerException();
    }
    SparseArray<MetaInfo> current;
    SparseArray<MetaInfo> newArray;
    do {
        current = atomicReference.get();
        // copy the current array, then add the new info
        newArray = current.clone();
        newArray.append(key, info);
    } while (!atomicReference.compareAndSet(current, newArray));
}

public static MetaInfo getMeta(int key) {
    return atomicReference.get().get(key);
}
I want to create a thread that makes some HTTP requests every few seconds and is easy to pause and resume at a moment's notice.
Is the way below preferred, safe and efficient?
public class Facebook extends Thread {
    public boolean running = false;

    public void startThread() {
        running = true;
    }

    public void stopThread() {
        running = false;
    }

    public void run() {
        while (true) {
            while (running) {
                //HTTP Calls
                try {
                    Facebook.sleep(2000);
                } catch (InterruptedException e) {
                    return;
                }
            }
        }
    }
}
Your Code:
In your example, the boolean should be a volatile boolean to operate properly. The other issue is that if running == false, your thread just burns CPU in a tight loop; you would probably want to use object monitors or a Condition to wait idly for the flag to become true again.
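For illustration, a rough sketch of the Condition-based idea applied to your class (this is one possible shape, not the only one; the wait()/notify() equivalent is shown in another answer below):

import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class Facebook implements Runnable {
    private final Lock lock = new ReentrantLock();
    private final Condition resumed = lock.newCondition();
    private boolean running = false; // guarded by lock

    public void startThread() {
        lock.lock();
        try {
            running = true;
            resumed.signalAll();
        } finally {
            lock.unlock();
        }
    }

    public void stopThread() {
        lock.lock();
        try {
            running = false;
        } finally {
            lock.unlock();
        }
    }

    @Override
    public void run() {
        try {
            while (true) {
                lock.lock();
                try {
                    while (!running) {
                        resumed.await(); // sleeps instead of burning CPU
                    }
                } finally {
                    lock.unlock();
                }
                // HTTP calls
                Thread.sleep(2000);
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // stop cleanly if interrupted
        }
    }
}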
Timer Option:
I would suggest simply creating a Timer for this. Each Timer implicitly gets its own thread, which is what you are trying to accomplish.
Then create a TimerTask (FacebookTask below is this) that performs your task and from your main control class, no explicit threads necessary, something like:
Timer t;

void resumeRequests() {
    if (t == null) { // otherwise it's already running
        t = new Timer();
        t.scheduleAtFixedRate(new FacebookTask(), 0, 2000);
    }
}

void pauseRequests() {
    if (t != null) { // otherwise it's not running
        t.cancel();
        t = null;
    }
}
Note that above, resumeRequests() will cause a request to happen immediately upon resume (as specified by the 0 delay parameter); you could theoretically increase the request rate if you paused and resumed repeatedly in less than 2000 ms. This doesn't seem like it will be an issue for you, but an alternative implementation is to keep the timer running constantly and have a volatile boolean flag in the FacebookTask that you can set to enable/disable it (so if it's false, it doesn't make the request but keeps checking every 2000 ms). Pick whichever makes the most sense for you.
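If you go that route, the flag-based FacebookTask might look roughly like this (FacebookTask isn't shown in your question, so this is just an assumed shape):

import java.util.TimerTask;

class FacebookTask extends TimerTask {
    // flipped by your pause/resume logic; volatile so the timer thread sees changes
    volatile boolean enabled = true;

    @Override
    public void run() {
        if (!enabled) {
            return; // the timer still fires every 2000 ms, but we skip the work while paused
        }
        // HTTP calls go here
    }
}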
Other Options:
You could also use a ScheduledExecutorService, as fge mentions in the comments. It has more features than a Timer and is equally easy to use; it also scales well if you need to add more tasks in the future.
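For comparison, the same pause/resume control with a ScheduledExecutorService might look roughly like this (again using the hypothetical FacebookTask as the Runnable):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
ScheduledFuture<?> requests;

void resumeRequests() {
    if (requests == null || requests.isCancelled()) {
        requests = scheduler.scheduleAtFixedRate(new FacebookTask(), 0, 2000, TimeUnit.MILLISECONDS);
    }
}

void pauseRequests() {
    if (requests != null) {
        requests.cancel(false); // false = let an in-flight request finish
    }
}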
In any case there's no real reason to bother with Threads directly here; there are plenty of great tools in the JDK for this job.
The suggestion to use a Timer would work better. If you want to do the threading manually, though, then something more like this would be safer and better:
class Facebook implements Runnable {

    private final Object monitor = new Object();
    public boolean running = false;

    public void startThread() {
        synchronized (monitor) {
            running = true;
            monitor.notifyAll();
        }
    }

    public void stopThread() {
        synchronized (monitor) {
            running = false;
        }
    }

    @Override
    public void run() {
        while (true) {
            try {
                synchronized (monitor) {
                    // Wait until somebody calls startThread()
                    while (!running) {
                        monitor.wait();
                    }
                }
                //HTTP Calls
                Thread.sleep(2000);
            } catch (InterruptedException ie) {
                break;
            }
        }
    }
}
Note in particular:
You should generally implement Runnable instead of subclassing Thread, then use that Runnable to specify the work for a generic Thread. The work a thread performs is not the same thing as the thread itself, so this yields a better model. It's also more flexible if you want to be able to perform the same work by other means (e.g. a Timer).
You need to use some form of synchronization whenever you want two threads to exchange data (such as the state of the running instance variable). There are classes, AtomicBoolean for example, that have such synchronization built in, but sometimes there are advantages to synchronizing manually.
In the particular case that you want one thread to stop work until another thread instructs it to continue, you generally want to use Object.wait() and a corresponding Object.notify() or Object.notifyAll(), as demonstrated above. The waiting thread consumes zero CPU until it is signaled. Since you need to use manual synchronization with wait/notify anyway, there would be no additional advantage to be gained by using an AtomicBoolean.
Edited to add:
Since apparently there is some confusion about how to use this (or the original version, I guess), here's an example:
class MyClass {
    public static void main(String[] args) {
        Facebook fb = new Facebook();
        Thread fbThread = new Thread(fb);
        fbThread.start();
        /* ... do stuff ... */
        // Pause the Facebook thread:
        fb.stopThread();
        /* ... do more stuff ... */
        // Resume the Facebook thread:
        fb.startThread();
        // etc.
        // When done:
        fbThread.interrupt(); // else the program never exits
    }
}
I would recommend using guarded blocks and attaching the thread to a timer.
We need to lock a method responsible for loading database data into a HashMap-based cache.
A possible situation is that a second thread tries to access the method while the first method is still loading cache.
We consider the second thread's effort in this case to be superfluous. We would therefore like to have that second thread wait until the first thread is finished, and then return (without loading the cache again).
What I have works, but it seems quite inelegant. Are there better solutions?
private static final ReentrantLock cacheLock = new ReentrantLock();

private void loadCachemap() {
    if (cacheLock.tryLock()) {
        try {
            this.cachemap = retrieveParamCacheMap();
        } finally {
            cacheLock.unlock();
        }
    } else {
        try {
            cacheLock.lock(); // wait until thread doing the load is finished
        } finally {
            try {
                cacheLock.unlock();
            } catch (IllegalMonitorStateException e) {
                logger.error("loadCachemap() finally {}", e);
            }
        }
    }
}
I prefer a more resilient approach using read locks AND write locks. Something like:
private static final ReadWriteLock cacheLock = new ReentrantReadWriteLock();
private static final Lock cacheReadLock = cacheLock.readLock();
private static final Lock cacheWriteLock = cacheLock.writeLock();

private void loadCache() throws Exception {
    // Expiry.
    while (storeCache.expired(CachePill)) {
        /**
         * Allow only one in - all others will wait for 5 seconds before checking again.
         *
         * Eventually the one that got in will finish loading, refresh the Cache pill and let all the waiting ones out.
         *
         * Also waits until all read locks have been released - not sure if that might cause problems under busy conditions.
         */
        if (cacheWriteLock.tryLock(5, TimeUnit.SECONDS)) {
            try {
                // Got a lock! Start the rebuild if still out of date.
                if (storeCache.expired(CachePill)) {
                    rebuildCache();
                }
            } finally {
                cacheWriteLock.unlock();
            }
        }
    }
}
Note that storeCache.expired(CachePill) detects a stale cache, which may be more than you want, but the concept here is the same: establish a write lock before updating the cache, which will deny all read attempts until the rebuild is done. Also, manage multiple write attempts in a loop of some sort, or just drop out and let the read lock wait for access.
A read from the cache now looks like this:
public Object load(String id) throws Exception {
    Store store = null;
    // Make sure cache is fresh.
    loadCache();
    try {
        // Establish a read lock so we do not attempt a read while the cache is being updated.
        cacheReadLock.lock();
        store = storeCache.get(id);
    } finally {
        // Make sure the lock is cleared.
        cacheReadLock.unlock();
    }
    return store;
}
The primary benefit of this form is that read access does not block other read access but everything stops cleanly during a rebuild - even other rebuilds.
You didn't say how complicated your structure is or how much concurrency/contention you need to handle. There are many ways to address your need.
If your data is simple, use a ConcurrentHashMap or similar to hold it. Then just read and write from your threads without any further coordination; the sketch below shows the idea.
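For example, if the cache is essentially an id-to-value map, a per-entry load with ConcurrentHashMap needs no explicit locking at all (retrieveStore here stands in for whatever loads one entry from the database):

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

private final ConcurrentMap<String, Store> cacheMap = new ConcurrentHashMap<>();

public Store load(String id) {
    // computeIfAbsent loads each entry at most once, even if several threads ask for it concurrently
    return cacheMap.computeIfAbsent(id, this::retrieveStore);
}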
Another alternative is to use actor model and put read/write on the same queue.
If all you need is to fill a read-only map which is initialized from database once requested, you could use any form of double-check locking which may be implemented in a number of ways. The easiest variant would be the following:
private volatile Map<T, V> cacheMap;

public void loadCacheMap() {
    if (cacheMap == null) {
        synchronized (this) {
            if (cacheMap == null) {
                cacheMap = retrieveParamCacheMap();
            }
        }
    }
}
But I would personally prefer to avoid any form of synchronization here and just make sure that the initialization is done before any other thread can access the map (for example, via an init method called by a DI container). In that case you even avoid the overhead of volatile.
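A sketch of what I mean (the key and value types are placeholders; the whole point is that init() runs exactly once, on one thread, before anything else reads the map):

private Map<String, Object> cacheMap; // not volatile: safe publication is handled by the container/startup code

// Called once at startup (e.g. by the DI container) before any other thread can reach this object.
public void init() {
    cacheMap = retrieveParamCacheMap();
}

public Object get(String key) {
    return cacheMap.get(key);
}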
EDIT: The above works only when a single initial load is expected. In the case of multiple updates, you could try to replace the tryLock with some other form of test and test-and-set, for example using something like this:
private final AtomicReference<CountDownLatch> sync =
        new AtomicReference<>(new CountDownLatch(0));

private void loadCacheMap() throws InterruptedException {
    CountDownLatch oldSync = sync.get();
    if (oldSync.getCount() == 0) { // if nobody is updating right now
        CountDownLatch newSync = new CountDownLatch(1);
        if (sync.compareAndSet(oldSync, newSync)) {
            // we won the race: do the load, then release the waiters
            cacheMap = retrieveParamCacheMap();
            newSync.countDown();
            return;
        }
    }
    // somebody else is loading: wait for them to finish
    sync.get().await();
}
I have a class that holds a Card object. This class keeps checking to see whether the object is no longer null. Only one other thread can update this object. Should I just do it like the code below? Use volatile? synchronized? A lock (which I don't really know how to use)? What do you recommend as the easiest solution?
class A {
    public Card myCard = null;

    public void keepCheck() throws InterruptedException {
        while (myCard == null) {
            Thread.sleep(100);
        }
        //value updated
        callAnotherMethod();
    }
}
Another thread does the following:
public void run() {
    a.myCard = new Card(5);
}
What do you suggest?
You should use a proper wait event (see the Guarded Block tutorial), otherwise you run the risk of the "watching" thread seeing the reference before it sees completely initialized member fields of the Card. Also wait() will allow the thread to sleep instead of sucking up CPU in a tight while loop.
For example:
class A {

    private final Object cardMonitor = new Object();
    private volatile Card myCard;

    public void keepCheck() {
        synchronized (cardMonitor) {
            while (myCard == null) {
                try {
                    cardMonitor.wait();
                } catch (InterruptedException x) {
                    // either abort or ignore, your choice
                }
            }
        }
        callAnotherMethod();
    }

    public void run() {
        synchronized (cardMonitor) {
            myCard = new Card(5);
            cardMonitor.notifyAll();
        }
    }
}
I made myCard private in the above example. I do recommend avoiding lots of public fields in a case like this, as the code could end up getting messy fast.
Also note that you do not need cardMonitor; you could use the A instance itself, but having a separate monitor object lets you have finer control over synchronization.
Beware, with the above implementation, if run() is called while callAnotherMethod() is executing, it will change myCard which may break callAnotherMethod() (which you do not show). Moving callAnotherMethod() inside the synchronized block is one possible solution, but you have to decide what the appropriate strategy is there given your requirements.
The variable needs to be volatile when it is modified from a different thread if you intend to poll for it, but a better solution is to use wait()/notify() or even a Semaphore to keep your other thread sleeping until the myCard variable is initialized.
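For instance, a minimal Semaphore-based sketch (the setCard method name is made up; Card and callAnotherMethod come from your question):

import java.util.concurrent.Semaphore;

class A {
    private final Semaphore cardReady = new Semaphore(0); // zero permits until the card is set
    private volatile Card myCard;

    public void keepCheck() throws InterruptedException {
        cardReady.acquire(); // sleeps here instead of polling
        callAnotherMethod();
    }

    // called by the other thread instead of assigning the field directly
    public void setCard(Card card) {
        myCard = card;
        cardReady.release();
    }
}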
Looks like you have a classic producer/consumer case.
You can handle this case using wait()/notify() methods. See here for an example: How to use wait and notify in Java?
Or here, for more examples: http://www.programcreek.com/2009/02/notify-and-wait-example/
Maybe this question has been asked many times before, but I never found a satisfying answer.
The problem:
I have to simulate a process scheduler using the round-robin strategy. I'm using threads to simulate processes and multiprogramming; everything works fine with the JVM managing the threads. But the thing is that now I want to have control over all the threads so that I can run each thread alone for a certain quantum (or time slice), just like real OS process schedulers.
What I'm thinking of doing:
I want to have a list of all threads; as I iterate over the list, I want to execute each thread for its corresponding quantum, but as soon as the time is up I want to pause that thread indefinitely until all threads in the list have been executed, and then, when I reach the same thread again, resume it, and so on.
The question:
So is there a way, without using the deprecated methods stop(), suspend(), or resume(), to have this control over threads?
Yes, there is:
Object.wait(), Object.notify(), and a bunch of other much nicer synchronization primitives in java.util.concurrent.
Who said Java is not low level enough?
Here is my 3 minute solution. I hope it fits your needs.
import java.util.ArrayList;
import java.util.List;

public class ThreadScheduler {

    private List<RoundRobinProcess> threadList
            = new ArrayList<RoundRobinProcess>();

    public ThreadScheduler() {
        for (int i = 0; i < 100; i++) {
            threadList.add(new RoundRobinProcess());
            new Thread(threadList.get(i)).start();
        }
    }

    private class RoundRobinProcess implements Runnable {

        private final Object lock = new Object();
        private volatile boolean suspend = false, stopped = false;

        @Override
        public void run() {
            while (!stopped) {
                while (!suspend) {
                    // do work
                }
                synchronized (lock) {
                    try {
                        // re-check the flags inside the lock so a notify sent
                        // before we started waiting cannot be lost
                        while (suspend && !stopped) {
                            lock.wait();
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
            }
        }

        public void suspend() {
            suspend = true;
        }

        public void stop() {
            suspend = true;
            stopped = true;
            synchronized (lock) {
                lock.notifyAll();
            }
        }

        public void resume() {
            suspend = false;
            synchronized (lock) {
                lock.notifyAll();
            }
        }
    }
}
Please note that "do work" should not be blocking.
Short answer: no. You don't get to implement a thread scheduler in Java, as it doesn't operate at a low enough level.
If you really do intend to implement a process scheduler, I would expect you to need to hook into the underlying operating system calls, and as such I doubt this will ever be a good idea (if remotely possible) in Java. At the very least, you wouldn't be able to use java.lang.Thread to represent the running threads so it may as well all be done in a lower-level language like C.