What are the possible ways to make code thread-safe without using the synchronized keyword?
Actually, lots of ways:
No need for synchronization at all if you don't have mutable state.
No need for synchronization if the mutable state is confined to a single thread. This can be done by using local variables or java.lang.ThreadLocal.
You can also use built-in synchronizers. java.util.concurrent.locks.ReentrantLock has the same functionality as the lock you acquire when using synchronized blocks and methods, and it is even more powerful (a minimal sketch follows below).
Only have variables/references local to methods. Or ensure that any instance variables are immutable.
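For illustration, here is a minimal sketch of the ReentrantLock point above (the Counter class and its field are invented for this example): lock()/unlock() in a try/finally gives the same mutual exclusion as a synchronized block.

import java.util.concurrent.locks.ReentrantLock;

class Counter {
    private final ReentrantLock lock = new ReentrantLock();
    private int value = 0;

    int increment() {
        lock.lock();           // same mutual exclusion as entering a synchronized block
        try {
            return ++value;
        } finally {
            lock.unlock();     // always release, even if the guarded code throws
        }
    }
}

The extra power comes from features such as tryLock(), lockInterruptibly() and optional fairness, which plain synchronized does not offer.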
You can make your code thread-safe by making all of its data immutable; if there is no mutability, everything is thread-safe.
You may also want to have a look at the java.util.concurrent API, which provides read/write locks that perform better when there are many readers and only a few writers. The plain synchronized keyword blocks two readers from each other as well; a short read/write-lock sketch follows.
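A rough sketch of that read/write-lock idea (the Config class is invented for illustration): multiple readers can hold the read lock at the same time, while the write lock is exclusive.

import java.util.concurrent.locks.ReentrantReadWriteLock;

class Config {
    private final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();
    private String value = "initial";

    String read() {
        rw.readLock().lock();          // many readers may hold this simultaneously
        try {
            return value;
        } finally {
            rw.readLock().unlock();
        }
    }

    void write(String newValue) {
        rw.writeLock().lock();         // exclusive: waits for readers and other writers
        try {
            value = newValue;
        } finally {
            rw.writeLock().unlock();
        }
    }
}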
////////////FIRST METHOD USING SINGLE boolean//////////////
public class ThreadTest implements Runnable {

    ThreadTest() {
        Log.i("Ayaz", "Constructor..");
    }

    // volatile so that one thread's write is visible to the other;
    // note that the check in run() and the assignment after it are still
    // not one atomic step, so two threads can slip through together
    private volatile boolean lockBoolean = false;

    public void run() {
        Log.i("Ayaz", "Thread started.." + Thread.currentThread().getName());
        while (lockBoolean) {
            // busy-wait while the other thread is inside synchronizedMethod()
        }
        lockBoolean = true;
        synchronizedMethod();
    }

    /**
     * This method is "synchronized" without using the synchronized keyword.
     */
    public void synchronizedMethod() {
        Log.e("Ayaz", "processing...." + Thread.currentThread().getName());
        try {
            Thread.sleep(3000); // sleep is a static method, so call it on the class
        } catch (InterruptedException e) {
            System.out.println("Exp");
        }
        Log.e("Ayaz", "complete.." + Thread.currentThread().getName());
        lockBoolean = false;
    }
} // end of ThreadTest class
// For testing, use the lines below in a main method or in an Activity:
ThreadTest threadTest = new ThreadTest();
Thread threadA = new Thread(threadTest, "A thread");
Thread threadB = new Thread(threadTest, "B thread");
threadA.start();
threadB.start();
///////////SECOND METHOD USING TWO boolean/////////////////
public class ThreadTest implements Runnable {

    ThreadTest() {
        Log.i("Ayaz", "Constructor..");
    }

    // volatile so that one thread's write is visible to the other;
    // the check-then-set in run() is still not atomic, so this remains
    // a demonstration rather than a reliable lock
    private volatile boolean isAnyThreadInUse = false;

    public void run() {
        Log.i("Ayaz", "Thread started.." + Thread.currentThread().getName());
        // lockBoolean is local: each thread keeps spinning until it has run the method itself
        boolean lockBoolean = false;
        while (!lockBoolean) {
            if (!isAnyThreadInUse) {
                isAnyThreadInUse = true;
                synchronizedMethod();
                lockBoolean = true;
            }
        }
    }

    /**
     * This method is "synchronized" without using the synchronized keyword.
     */
    public void synchronizedMethod() {
        Log.e("Ayaz", "processing...." + Thread.currentThread().getName());
        try {
            Thread.sleep(3000); // sleep is a static method, so call it on the class
        } catch (InterruptedException e) {
            System.out.println("Exp");
        }
        Log.e("Ayaz", "complete.." + Thread.currentThread().getName());
        isAnyThreadInUse = false;
    }
} // end of ThreadTest class

// For testing, use the lines below in a main method or in an Activity:
ThreadTest threadTest = new ThreadTest();
Thread t1 = new Thread(threadTest, "a thread");
Thread t2 = new Thread(threadTest, "b thread");
t1.start();
t2.start();
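Note that both variants above still contain a check-then-act race: two threads can observe the flag as false and enter synchronizedMethod() together. Here is a hedged sketch of the same spin idea made safe with AtomicBoolean.compareAndSet (the Android Log calls are replaced with System.out so the sketch is self-contained):

import java.util.concurrent.atomic.AtomicBoolean;

public class ThreadTest implements Runnable {
    private final AtomicBoolean inUse = new AtomicBoolean(false);

    public void run() {
        // compareAndSet makes the check and the set one atomic step,
        // so only one thread at a time can win the flag.
        while (!inUse.compareAndSet(false, true)) {
            // busy-wait; a real implementation would rather block on a lock or queue
        }
        try {
            synchronizedMethod();
        } finally {
            inUse.set(false);  // always release, even if the method throws
        }
    }

    public void synchronizedMethod() {
        System.out.println("processing.... " + Thread.currentThread().getName());
        try {
            Thread.sleep(3000);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        System.out.println("complete.. " + Thread.currentThread().getName());
    }
}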
To maintain predictability you must either ensure all access to mutable data is made sequentially or handle the issues caused by parallel access.
The most coarse-grained protection uses the synchronized keyword. Beyond that there are at least two further layers of possibility, each with its own benefits.
Locks/Semaphores
These can be very effective. For example, if you have a structure that is read by many threads but only updated by one you may find a ReadWriteLock useful.
Locks can be much more efficient if you choose your lock to match the algorithm.
Atomics
Use of AtomicReference for example can often provide completely lock free functionality. This can usually provide huge benefits.
The reasoning behind atomics is that they are allowed to fail, but they tell you that they failed in a way you can handle.
For example, if you want to change a value, you can read it and then write its new value so long as it is still the old value. This is called a compare-and-set (CAS); it can usually be implemented in hardware and so is extremely efficient. All you then need is something like:
long old = atomic.get();
while (!atomic.compareAndSet(old, old + 1)) {
    // The value changed between my get() and the compareAndSet(). Get it again.
    old = atomic.get();
}
// (for this particular case, AtomicLong.incrementAndGet() does the job directly)
Note, however, that predictability is not always the requirement.
Well, there are many ways you can achieve this, and each comes in several flavors. Java 8 also ships with new concurrency features.
Some ways you can ensure thread safety are (a minimal Semaphore sketch follows this list):
Semaphores
Locks: ReentrantLock, ReadWriteLock, StampedLock (Java 8)
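For the Semaphore item, a minimal sketch (the SharedResource class is invented for illustration); a Semaphore with a single permit behaves like a mutex:

import java.util.concurrent.Semaphore;

class SharedResource {
    private final Semaphore permit = new Semaphore(1); // one permit = mutual exclusion

    void use() throws InterruptedException {
        permit.acquire();      // blocks until the permit is free
        try {
            // at most one thread executes this section at a time
        } finally {
            permit.release();  // always return the permit
        }
    }
}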
Why do you need to do it?
Using only local variables/references will not solve most complex business needs.
Also, even if instance variables hold immutable objects, the references themselves can still be reassigned by other threads unless they are final.
One option is to use something like SingleThreadModel, but it is highly discouraged and deprecated.
You can also look at the java.util.concurrent API, as suggested above by Kal.
For a recent library I'm writing, I wrote a thread which loops indefinitely. In the loop, I start with a conditional statement that checks a property on the threaded object. However, it seems that whatever initial value the property has is what the check keeps seeing, even after the property is updated, unless I add some kind of interruption such as a Thread.sleep or a print statement.
I'm not really sure how to phrase the question, otherwise I would be searching the Java documentation. I have boiled the code down to a minimal example that demonstrates the problem in simple terms.
public class App {

    public static void main(String[] args) {
        App app = new App();
    }

    class Test implements Runnable {
        public boolean flag = false;

        public void run() {
            while (true) {
                // try {
                //     Thread.sleep(1);
                // } catch (InterruptedException e) {}
                if (this.flag) {
                    System.out.println("True");
                }
            }
        }
    }

    public App() {
        Test t = new Test();
        Thread thread = new Thread(t);
        System.out.println("Starting thread");
        thread.start();
        try {
            Thread.sleep(1000);
        } catch (InterruptedException e) {}
        t.flag = true;
        System.out.println("New flag value: " + t.flag);
    }
}
Now, I would presume that after we change the value of the flag property on the running thread, we would immediately see masses of 'True' being printed to the terminal. However, we don't.
If I un-comment the Thread.sleep lines inside the thread loop, the program works as expected and we see the many lines of 'True' printed after we change the value in the App object. In addition, any print method in place of the Thread.sleep also works, but a simple assignment does not; I assume that is because it is removed as unused code at compile time.
So, my question is really: Why do I have to use some kind of interruption to get the thread to check conditions correctly?
So, my question is really: Why do I have to use some kind of interruption to get the thread to check conditions correctly?
Well you don't have to. There are at least two ways to implement this particular example without using "interruption".
If you declare flag to be volatile, then it will work.
It will also work if you declare flag to be private, write synchronized getter and setter methods, and use those for all accesses.
public class App {

    public static void main(String[] args) {
        App app = new App();
    }

    class Test implements Runnable {
        private boolean flag = false;

        public synchronized boolean getFlag() {
            return this.flag;
        }

        public synchronized void setFlag(boolean flag) {
            this.flag = flag;
        }

        public void run() {
            while (true) {
                if (this.getFlag()) { // Must use the getter here too!
                    System.out.println("True");
                }
            }
        }
    }

    public App() {
        Test t = new Test();
        Thread thread = new Thread(t);
        System.out.println("Starting thread");
        thread.start();
        try {
            Thread.sleep(1000);
        } catch (InterruptedException e) {}
        t.setFlag(true);
        System.out.println("New flag value: " + t.getFlag());
    }
}
But why do you need to do this?
Because unless you use volatile or synchronized (and use synchronized correctly), one thread is not guaranteed to see memory changes made by another thread.
In your example, the child thread does not see the up-to-date value of flag. (It is not that the conditions themselves are incorrect or "don't work"; they are getting stale inputs. This is "garbage in, garbage out".)
The Java Language Specification sets out precisely the conditions under which one thread is guaranteed to see (previous) writes made by another thread. This part of the spec is called the Java Memory Model, and it is in JLS 17.4. There is an easier-to-understand explanation in Java Concurrency in Practice by Brian Goetz et al.
Note that the unexpected behavior could be due to the JIT deciding to keep the flag in a register. It could also be that the JIT compiler has decided it does not need to force memory cache write-through, etcetera. (The JIT compiler doesn't want to force write-through on every write to every field. That would be a major performance hit on multi-core systems, which most modern machines are.)
The Java interruption mechanism is yet another way to deal with this. You don't need any extra synchronization, because the interrupt status is managed by the Thread methods themselves. In addition, interruption works when the thread you are trying to interrupt is currently waiting or blocked on an interruptible operation, e.g. in an Object::wait call.
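A minimal sketch of that interruption-based approach (the Worker class is made up for illustration; the interrupt status itself needs no extra synchronization):

class Worker implements Runnable {
    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            // do a unit of work; the loop exits once someone calls interrupt()
        }
    }
}

// elsewhere, from another thread:
Thread t = new Thread(new Worker());
t.start();
// ... later ...
t.interrupt(); // the worker observes this on its next loop check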
Because the variable is not modified in that thread, the JVM is free to effectively optimize the check away. To force an actual check, use the volatile keyword:
public volatile boolean flag = false;
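Applied to the Test class from the question (shown standalone here), the fix is just that one field declaration:

class Test implements Runnable {
    // volatile guarantees that a write by the main thread is visible
    // to the next read in this thread
    public volatile boolean flag = false;

    public void run() {
        while (true) {
            if (this.flag) {
                System.out.println("True");
            }
        }
    }
}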
I have this kind of code:
public class RecursiveQueue {

    //@Inject
    private QueueService queueService;

    public static void main(String[] args) {
        RecursiveQueue test = new RecursiveQueue();
        test.enqueue(new Node("X"), true);
        test.enqueue(new Node("Y"), false);
        test.enqueue(new Node("Z"), false);
    }

    private void enqueue(final Node node, final boolean waitTillFinished) {
        final AtomicLong totalDuration = new AtomicLong(0L);
        final AtomicInteger counter = new AtomicInteger(0);

        AfterCallback callback = new AfterCallback() {
            @Override
            public void onFinish(Result result) {
                for (Node aNode : result.getChildren()) {
                    counter.incrementAndGet();
                    queueService.requestProcess(aNode, this);
                }
                totalDuration.addAndGet(result.getDuration());
                if (counter.decrementAndGet() <= 0) { // last one
                    System.out.println("Processing of " + node.toString() + " has finished in " + totalDuration.get() + " ms");
                    if (waitTillFinished) {
                        counter.notify();
                    }
                }
            }
        };

        counter.incrementAndGet();
        queueService.requestProcess(node, callback);

        if (waitTillFinished) {
            try {
                counter.wait();
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
}
Imagine there is a queueService which uses a blocking queue and a few consumer threads to process nodes, i.e. it calls a DAO to fetch the children of each node (it's a tree).
So the requestProcess method just enqueues the node and does not block.
Is there some better/safer way to avoid using wait/notify in this sample?
From what I have found I could use a Phaser (but I am on Java 6) or conditions (but I am not using locks).
There is no synchronized anything in your example. You mustn't call o.wait() or o.notify() except from within a synchronized(o) {...} block.
Your call to wait() is not in a loop. A premature return may never happen in your JVM, but the language spec permits wait() to return prematurely (known as a spurious wakeup). More generally, it is good practice to always use a loop because it is a familiar design pattern: a while statement costs no more than an if, you need it because of the possibility of spurious wakeups, you absolutely must have it in a multi-consumer situation, and so you might as well just always write it that way.
Since you must use synchronized blocks in order to use wait() and notify(), there probably is no reason to use Atomic anything.
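To make those two points concrete, here is a minimal sketch of the canonical guarded-wait pattern (the Gate class is invented for illustration):

class Gate {
    private final Object lock = new Object();
    private boolean open = false;

    void awaitOpen() throws InterruptedException {
        synchronized (lock) {          // must hold the monitor you wait on
            while (!open) {            // loop guards against spurious wakeups
                lock.wait();
            }
        }
    }

    void openGate() {
        synchronized (lock) {          // must hold the monitor you notify on
            open = true;
            lock.notifyAll();
        }
    }
}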
This "recursive" thing seems awfully complicated, what with the callback adding more items to the queue. How deep can that go?
I think you are looking for CountDownLatch.
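A hedged sketch of how the enqueue method from the question could wait with a CountDownLatch instead of wait/notify (Node, Result, AfterCallback and queueService are the question's own types; only the latch is new):

// needs java.util.concurrent.CountDownLatch and java.util.concurrent.atomic.AtomicInteger
// inside enqueue(final Node node, final boolean waitTillFinished):
final CountDownLatch done = new CountDownLatch(1);
final AtomicInteger counter = new AtomicInteger(0);

AfterCallback callback = new AfterCallback() {
    @Override
    public void onFinish(Result result) {
        for (Node child : result.getChildren()) {
            counter.incrementAndGet();
            queueService.requestProcess(child, this);
        }
        if (counter.decrementAndGet() <= 0) {
            done.countDown();          // wakes the waiting caller exactly once
        }
    }
};

counter.incrementAndGet();
queueService.requestProcess(node, callback);

if (waitTillFinished) {
    try {
        done.await();                  // no monitor needed, unlike wait()
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
}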
You actually do use locks, or, let's put it this way, you should be using them if you try to use wait/notify, as James pointed out. Since you are bound to Java 6 and ForkJoin or Phaser are not available to you, the choice is either implementing wait/notify properly or using a Condition with an explicit Lock. That is a matter of personal preference.
Another alternative is to try to restructure your algorithm so that you know the entire set of steps to execute up front. That is not always possible, though.
I want to clarify my understanding: if I surround a block of code with a synchronized(this) {} statement, does this mean that I am making those statements atomic?
No, it does not ensure your statements are atomic. For example, if you have two statements inside one synchronized block, the first may succeed while the second fails, so the result is not "all or nothing". With regard to multiple threads, however, you do ensure that the statements of two threads are not interleaved. In other words: all statements of all threads are strictly serialized, even though there is no guarantee that either all or none of a thread's statements get executed.
Have a look at how Atomicity is defined.
Here is an example showing that the reader is able to read a corrupted state, so the synchronized block was not executed atomically:
import static java.util.concurrent.Executors.newFixedThreadPool;

import java.util.Arrays;
import java.util.HashSet;
import java.util.Random;
import java.util.Set;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;

public class Example {

    public static void sleep() {
        try { Thread.sleep(400); } catch (InterruptedException e) {}
    }

    public static void main(String[] args) {
        final Example example = new Example(1);
        ExecutorService executor = newFixedThreadPool(2);
        try {
            Future<?> reader = executor.submit(new Runnable() {
                @Override public void run() {
                    int value;
                    do {
                        value = example.getSingleElement();
                        System.out.println("single value is: " + value);
                    } while (value != 10);
                }
            });
            Future<?> writer = executor.submit(new Runnable() {
                @Override public void run() {
                    for (int value = 2; value < 10; value++) example.failDoingAtomic(value);
                }
            });
            reader.get();
            writer.get();
        } catch (Exception e) {
            e.getCause().printStackTrace();
        } finally {
            executor.shutdown();
        }
    }

    private final Set<Integer> singleElementSet;

    public Example(int singleIntValue) {
        singleElementSet = new HashSet<>(Arrays.asList(singleIntValue));
    }

    public synchronized void failDoingAtomic(int replacement) {
        singleElementSet.clear();
        if (new Random().nextBoolean()) sleep();
        else throw new RuntimeException("I failed badly before adding the new value :-(");
        singleElementSet.add(replacement);
    }

    public int getSingleElement() {
        return singleElementSet.iterator().next();
    }
}
No, synchronization and atomicity are two different concepts.
Synchronization means that a code block can be executed by at most one thread at a time, but other threads (that execute some other code that uses the same data) can see intermediate results produced inside the "synchronized" block.
Atomicity means that other threads do not see intermediate results - they see either the initial or the final state of the data affected by the atomic operation.
It's unfortunate that Java uses synchronized as a keyword. A synchronized block in Java is a "mutex" (short for "mutual exclusion"). It's a mechanism that ensures only one thread at a time can enter the block.
Mutexes are just one of many tools that are used to achieve "synchronization" in a multi-threaded program: broadly speaking, synchronization refers to all of the techniques that are used to ensure that the threads will work in a coordinated fashion to achieve a desired outcome.
Atomicity is what Oleg Estekhin said, above. We usually hear about it in the context of "transactions." Mutual exclusion (i.e., Java's synchronized) guarantees something less than atomicity: namely, it protects invariants.
An invariant is any assertion about the program's state that is supposed to be "always" true. E.g., in a game where players exchange virtual coins, the total number of coins in the game might be an invariant. But it's often impossible to advance the state of the program without temporarily breaking the invariant. The purpose of a mutex is to ensure that only one thread, the one doing the work, can see the temporary "broken" state.
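A small sketch of that coin example (the Bank class is invented for illustration): the invariant is that the total number of coins never changes, and the mutex hides the instant at which one balance has been debited but the other not yet credited.

class Bank {
    private final Object lock = new Object();
    private final int[] coins;            // coins[i] = balance of player i

    Bank(int players, int coinsEach) {
        coins = new int[players];
        java.util.Arrays.fill(coins, coinsEach);
    }

    void transfer(int from, int to, int amount) {
        synchronized (lock) {
            coins[from] -= amount;        // invariant temporarily broken here...
            coins[to]   += amount;        // ...and restored before anyone else can look
        }
    }

    int totalCoins() {
        synchronized (lock) {             // readers must use the same lock to avoid the broken state
            int sum = 0;
            for (int c : coins) sum += c;
            return sum;
        }
    }
}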
For code that uses synchronized on that object: yes.
For code that doesn't use the synchronized keyword on that object: no.
Can we say that by synchronizing a block of code we are making the contained statements atomic?
You are taking a very big leap there. Atomicity means that the operation, if atomic, appears to complete as a single indivisible step, whereas synchronizing a block only means that at most one thread can be inside the critical region at a time. The code in the critical region may take many CPU cycles and can fail part-way through, which is what makes it non-atomic.
I have a program with many threads; let's take six threads as an example. Five of them should be able to use a given resource concurrently, but the last thread shouldn't use it while a given condition holds, and should wait until that condition is over.
In my understanding a ReentrantLock can't be used because it can only be held by one thread at a time. On the other hand, a Semaphore can be held by many threads at a time, but I can't find a way to attach the condition to the acquire method.
Can these high-level objects do the trick, or will I have to implement this functionality using notify and wait directly?
Eg.
class A{
getResource{ ... }
}
//This Runable could be spawn many times at the same time
class B implements Runnable{
run {
setConditionToTrue
getResource
...
getResource
...
getResource
setConditionToFalse
}
}
//This will be working forever but only one Thread
class C implements Runnable{
run{
loop{
if(Condition == true) wait
getResource
}
}
}
Thanks in advance pals
I am restating your problem here: You want your B threads to access the shared resource concurrently, but your C thread should wait for some condition to occur before using the resource.
If I understand your question correctly, you can use a ReentrantLock to solve your problem.
Introduce a new function called getAccess() and make the C thread call this function to get the shared resource. Introduce two more functions to allow and stop the access to shared resource.
class A {
    // requires java.util.concurrent.locks.ReentrantLock and Condition
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition someCondition = lock.newCondition();
    private boolean bCondition = false;

    getResource{ ... } // Your existing method used by B threads

    void getAccess() throws InterruptedException { // Protected access to the resource, called by the C thread
        lock.lock();
        try {
            while (!bCondition)
                someCondition.await(); // the C thread waits here but releases the lock
        } finally {
            lock.unlock();
        }
    }

    void allowAccess() { // A B thread calls this to notify C and allow access
        lock.lock();
        try {
            bCondition = true;
            someCondition.signal(); // decided to release the resource
        } finally {
            lock.unlock();
        }
    }

    void stopAccess() { // A B thread can stop the access
        lock.lock();
        try {
            bCondition = false;
        } finally {
            lock.unlock();
        }
    }
}
If you want several threads to share a resource, you need to be more specific about the meaning of that sharing. Normally this means differentiating between the threads that read the current value of the resource and the threads that change it. This implies a concurrent-read / exclusive-write ("CREW") pattern if the writes are to be race-free and stable.
In the Java API, this is provided by the ReentrantReadWriteLock. There are also other alternatives worthy of consideration, such as the carefully-implemented Crew in JCSP.
Using [J]CSP, a different pattern is also available: that of wrapping the common resource in its own thread and providing access to it via a shared JCSP channel from all the client threads. This client-server pattern is easy to understand and implement, and has the added benefit that it is formally deadlock-free, given that the thread communication graph is acyclic.
This is an implementation of readers writers, i.e. many readers can read but only one writer can write at any one time. Does this work as expected?
public class ReadersWriters extends Thread {
    static int num_readers = 0;
    static int writing = 0;

    public void read_start() throws InterruptedException {
        synchronized (this.getClass()) {
            while (writing == 1) wait();
            num_readers++;
        }
    }

    public void read_end() {
        synchronized (this.getClass()) {
            if (--num_readers == 0) notifyAll();
        }
    }

    public void write_start() throws InterruptedException {
        synchronized (this.getClass()) {
            while (num_readers > 0) wait();
            writing = 1;
        }
    }

    public void write_end() {
        this.getClass().notifyAll();
    }
}
Also, is this implementation any different from declaring each method
public static synchronized void read_start()
for example?
Thanks
No - you're implicitly calling this.wait(), despite not having synchronized on this, but instead on the class. Likewise you're calling this.notifyAll() in read_end. My suggestions:
Don't extend Thread - you're not specializing the thread at all.
Don't use static variables like this from instance members; it makes it look like there's state on a per-object basis, but actually there isn't. Personally I'd just use instance variables.
Don't use underscores in names - the conventional Java names would be numReaders, readEnd (or better, endRead) etc.
Don't synchronize on either this or the class if you can help it. Personally I prefer to have a private final Object variable to lock on (and wait etc). That way you know that only your code can be synchronizing on it, which makes it easier to reason about.
You never set writing to 0. Any reason for using an integer instead of a boolean in the first place?
Of course, it's better to use the classes in the framework for this if at all possible, but I'm hoping you're really writing this to understand threading better. (A sketch applying the suggestions above follows.)
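A sketch of how the class might look after applying those suggestions (still hand-rolled for learning purposes rather than production use):

public class ReadWriteGate {
    private final Object lock = new Object();
    private int numReaders = 0;
    private boolean writing = false;

    public void startRead() throws InterruptedException {
        synchronized (lock) {
            while (writing) lock.wait();
            numReaders++;
        }
    }

    public void endRead() {
        synchronized (lock) {
            if (--numReaders == 0) lock.notifyAll();
        }
    }

    public void startWrite() throws InterruptedException {
        synchronized (lock) {
            while (writing || numReaders > 0) lock.wait();
            writing = true;
        }
    }

    public void endWrite() {
        synchronized (lock) {
            writing = false;              // reset the flag, which the original never did
            lock.notifyAll();
        }
    }
}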
You can achieve your goal in a much simpler way by using
java.util.concurrent.locks.ReentrantReadWriteLock
Just grab java.util.concurrent.locks.ReentrantReadWriteLock.ReadLock when you start reading and java.util.concurrent.locks.ReentrantReadWriteLock.WriteLock when you start writing.
This class is intended exactly for that - allow multiple readers that are mutually exclusive with single writer.
Your particular implementation of read_start is not equivalent to simply declaring the method synchronized. As J. Skeet noted, you need to call notify (and wait) on the object you are synchronizing on; you cannot use an unrelated object (here: the class) for this. And using the synchronized modifier on a method does not make the method implicitly call wait or anything like that.
There is, by the way, an implementation of read/write locks that ships with the core JDK: java.util.concurrent.locks.ReentrantReadWriteLock. Using it, your code might look like the following instead:
class Resource {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private final Lock rlock = lock.readLock();
    private final Lock wlock = lock.writeLock();

    void read() { ... /* caller has to hold the read lock */ ... }
    void write() { ... /* caller has to hold the write lock */ ... }

    Lock readLock() { return rlock; }
    Lock writeLock() { return wlock; }
}
Usage
final Resource r = ...;

r.readLock().lock();
try {
    r.read();
} finally {
    r.readLock().unlock();
}
and similar for the write operation.
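For symmetry, the write side of the usage would look like this:

r.writeLock().lock();
try {
    r.write();
} finally {
    r.writeLock().unlock();
}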
The example code synchronizes on this.getClass(), which returns the same Class object for every instance of ReadersWriters in the same class loader. If multiple instances of ReadersWriters exist, then even though you have multiple threads, they all contend for this one shared lock. This would be similar to adding the static keyword to a private lock field (as Jon Skeet suggested) and would likely lead to worse performance than synchronizing on this or on a per-instance private lock object. More specifically, a thread reading through one instance would block a thread writing through a different instance, which is likely undesirable.