Consider the code snippet below:
package sync;

public class LockQuestion {

    private String mutable;

    public synchronized void setMutable(String mutable) {
        this.mutable = mutable;
    }

    public String getMutable() {
        return mutable;
    }
}
At time Time1, thread Thread1 will update the ‘mutable’ variable. Synchronization is needed in the setter in order to flush the write from the thread's local cache to main memory.
At time Time2 (Time2 > Time1, no thread contention), thread Thread2 will read the value of mutable.
The question is: do I need to put synchronized on the getter as well? It looks like this won't cause any issues (memory should be up to date and Thread2's local cache should be invalidated and updated by Thread1), but I'm not sure.
Rather than wonder, why not just use the atomic references in java.util.concurrent?
(and for what it's worth, my reading of happens-before does not guarantee that Thread2 will see changes to mutable unless it also uses synchronized ... but I always get a headache from that part of the JLS, so use the atomic references)
It will be fine if you make mutable volatile; see the "cheap read-write lock" idiom for details.
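For reference, a minimal sketch of that volatile variant of the class from the question (no other changes assumed):

package sync;

public class LockQuestion {

    // volatile gives the write/read pair a happens-before edge,
    // so Thread2 is guaranteed to see the value written by Thread1
    private volatile String mutable;

    public void setMutable(String mutable) {
        this.mutable = mutable;
    }

    public String getMutable() {
        return mutable;
    }
}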
Are you absolutely sure that the getter will be called only after the setter is called? If so, you don't need the getter to be synchronized, since concurrent reads do not need to be synchronized.
If there is a chance that get and set can be called concurrently then you definitely need to synchronize the two.
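Concretely, synchronizing "the two" for the class in the question just means adding the keyword to the getter as well:

public synchronized String getMutable() {
    return mutable;
}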
If you are worried about performance in the reading thread, then read the value once using proper synchronization, volatile, or an atomic reference, and assign it to a plain old variable.
The assignment to the plain variable is guaranteed to happen after the atomic read (how else could it get the value?), and if the value is never written by another thread again, you are all set.
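Roughly, that pattern could look like the sketch below (the Reader class and its names are made up for illustration; it assumes the field is volatile and is never written again after the read):

public class Reader {

    private volatile String mutable; // written once by the writer thread

    public void readLoop() {
        String local = mutable; // one properly synchronized (volatile) read
        // from here on, only the plain local variable is used; no further volatile reads
        for (int i = 0; i < 1_000_000; i++) {
            process(local);
        }
    }

    private void process(String s) {
        // whatever the reading thread does with the value
    }
}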
I think you should start with something which is correct and optimise later when you know you have an issue. I would just use AtomicReference unless a few nanoseconds is too long. ;)
import java.util.concurrent.atomic.AtomicReference;

public static void main(String... args) {
    AtomicReference<String> ars = new AtomicReference<String>();
    ars.set("hello");

    long start = System.nanoTime();
    int runs = 1000 * 1000 * 1000;
    int length = test(ars, runs); // use the result so the JIT cannot eliminate the loop
    long time = System.nanoTime() - start;
    System.out.println("get() costs " + 1000 * time / runs + " ps.");
}

private static int test(AtomicReference<String> ars, int runs) {
    int len = 0;
    for (int i = 0; i < runs; i++)
        len = ars.get().length();
    return len;
}
Prints
get() costs 1219 ps.
ps is a picosecond, which is one millionth of a microsecond.
This probably will never result in incorrect behavior, but unless you also guarantee the order in which the threads start, you cannot necessarily guarantee that the compiler didn't reorder the read in Thread2 before the write in Thread1. More specifically, the Java runtime only has to guarantee that each thread executes as if it were run serially. So, as long as a thread produces the same output as it would running serially, the entire language stack (compiler, hardware, language runtime) can do pretty much whatever it wants, including allowing Thread2 to cache the result of LockQuestion.getMutable().
In practice, I would be very surprised if that ever happened. If you want to guarantee that it doesn't, declare LockQuestion.mutable as final and initialize it in the constructor, or use the following idiom:
private static class LazySomethingHolder {
    public static final Something something = new Something();
}

public static Something getInstance() {
    return LazySomethingHolder.something;
}
Related
I was thinking about how to solve a race condition between two threads that write to the same variable, using immutable objects and without the help of any keywords such as synchronized (locks) or volatile in Java.
But I couldn't figure it out. Is it possible to solve this problem that way at all?
public class Test {

    private static IAmSoImmutable iAmSoImmutable;

    private static final Runnable increment1000Times = () -> {
        for (int i = 0; i < 1000; i++) {
            iAmSoImmutable.increment();
        }
    };

    public static void main(String... args) throws Exception {
        for (int i = 0; i < 10; i++) {
            iAmSoImmutable = new IAmSoImmutable(0);

            Thread t1 = new Thread(increment1000Times);
            Thread t2 = new Thread(increment1000Times);

            t1.start();
            t2.start();
            t1.join();
            t2.join();

            // Prints a different result every time -- why? :
            System.out.println(iAmSoImmutable.value);
        }
    }

    public static class IAmSoImmutable {
        private int value;

        public IAmSoImmutable(int value) {
            this.value = value;
        }

        public IAmSoImmutable increment() {
            return new IAmSoImmutable(++value);
        }
    }
}
If you run this code you'll get different answers every time, which means a race condition is happening.
You cannot solve a race condition without using one of the existing synchronization (or volatile) techniques. That is what they were designed for; if it were possible to do without them, there would be no need for them.
More to the point, your code seems to be broken. This method:
public IAmSoImmutable increment() {
    return new IAmSoImmutable(++value);
}
is nonsense for two reasons:
1) It breaks the immutability of the class, because it changes the object's value field.
2) Its result - a new instance of IAmSoImmutable - is never used.
The fundamental problem here is that you've misunderstood what "immutability" means.
"Immutability" means — no writes. Values are created, but are never modified.
Immutability ensures that there are no race conditions, because race conditions are always caused by writes: either two threads performing writes that aren't consistent with each other, or one thread performing writes and another thread performing reads that give inconsistent results, or similar.
(Caveat: even an immutable object is effectively mutable during construction — Java creates the object, then populates its fields — so in addition to being immutable in general, you need to use the final keyword appropriately and take care with what you do in the constructor. But, those are minor details.)
With that understanding, we can go back to your initial sentence:
I was thinking about how to solve race condition between two threads which tries to write to the same variable using immutable objects and without helping any keywords such as synchronize(lock)/volatile in java.
The problem here is that you actually aren't using immutable objects: your entire goal is to perform writes, and the entire concept of immutability is that no writes happen. These are not compatible.
That said, immutability certainly has its place. You can have immutable IAmSoImmutable objects, with the only writes being that you swap these objects out for each other. That helps simplify the problem, by reducing the scope of writes that you have to worry about: there's only one kind of write. But even that one kind of write will require synchronization.
The best approach here is probably to use an AtomicReference<IAmSoImmutable>. This provides a non-blocking way to swap out your IAmSoImmutable-s, while guaranteeing that no write gets silently dropped.
(In fact, in the special case that your value is just an integer, the JDK provides AtomicInteger that handles the necessary compare-and-swap loops and so on for threadsafe incrementation.)
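As an illustration only (one possible shape, not the only one), here is a sketch of the question's counter rebuilt on an AtomicReference holding truly immutable instances; the class name AtomicImmutableTest is just for this example:

import java.util.concurrent.atomic.AtomicReference;

public class AtomicImmutableTest {

    // Truly immutable: the field is final and never changes after construction.
    public static final class IAmSoImmutable {
        final int value;

        IAmSoImmutable(int value) {
            this.value = value;
        }

        IAmSoImmutable increment() {
            return new IAmSoImmutable(value + 1);
        }
    }

    private static final AtomicReference<IAmSoImmutable> ref =
            new AtomicReference<>(new IAmSoImmutable(0));

    private static final Runnable increment1000Times = () -> {
        for (int i = 0; i < 1000; i++) {
            // Atomically swap the old immutable object for a new one;
            // no increment can be silently dropped.
            ref.updateAndGet(IAmSoImmutable::increment);
        }
    };

    public static void main(String... args) throws Exception {
        Thread t1 = new Thread(increment1000Times);
        Thread t2 = new Thread(increment1000Times);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(ref.get().value); // always 2000
    }
}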
Even if the problems are resolved by:
Avoiding the mutation of IAmSoImmutable.value
Reassigning the new object created within increment() back into the iAmSoImmutable reference
there are still pieces of your code that are not atomic and need some form of synchronization.
A solution would of course be to use a synchronized method:
public synchronized static void increment() {
    iAmSoImmutable = iAmSoImmutable.increment();
}

Thread t1 = new Thread(() -> {
    for (int i = 0; i < 1000; i++) {
        increment();
    }
});

Thread t2 = new Thread(() -> {
    for (int i = 0; i < 1000; i++) {
        increment();
    }
});
I am trying to see how volatile works here. If I declare cc as volatile, I get the output below. I know thread execution output varies from run to run, but I read somewhere that volatile is the same as synchronized, so why do I get this output? And does it matter if I use two instances of Thread1?
2Thread-0
2Thread-1
4Thread-1
3Thread-0
5Thread-1
6Thread-0
7Thread-1
8Thread-0
9Thread-1
10Thread-0
11Thread-1
12Thread-0
public class Volexample {

    int cc = 0;

    public static void main(String[] args) {
        Volexample ve = new Volexample();
        CountClass count = ve.new CountClass();

        Thread1 t1 = ve.new Thread1(count);
        Thread2 t2 = ve.new Thread2(count);

        t1.start();
        t2.start();
    }

    class Thread1 extends Thread {
        CountClass count = new CountClass();

        Thread1(CountClass count) {
            this.count = count;
        }

        @Override
        public void run() {
            /*for (int i = 0; i <= 5; i++)
                count.countUp();*/
            for (int i = 0; i <= 5; i++) {
                cc++;
                System.out.println(cc + Thread.currentThread().getName());
            }
        }
    }

    class Thread2 extends Thread {
        CountClass count = new CountClass();

        Thread2(CountClass count) {
            this.count = count;
        }

        @Override
        public void run() {
            /*for (int i = 0; i <= 5; i++)
                count.countUp();*/
            for (int i = 0; i <= 5; i++) {
                cc++;
                System.out.println(cc + Thread.currentThread().getName());
            }
        }
    }

    class CountClass {
        volatile int count = 0;

        void countUp() {
            count++;
            System.out.println(count + Thread.currentThread().getName());
        }
    }
}
In Java, the semantics of the volatile keyword are very well defined. They ensure that other threads will see the latest changes to a variable. But they do not make read-modify-write operations atomic.
So, if i is volatile and you do i++, you are guaranteed to read the latest value of i and you are guaranteed that other threads will see your write to i immediately, but you are not guaranteed that two threads won't interleave their read/modify/write operations so that the two increments have the effect of a single increment.
Suppose i is a volatile integer initialized to zero, no other writes have occurred yet, and two threads both execute i++. The following can happen:
The first thread reads a zero, the latest value of i.
The second threads reads a zero, also the latest value of i.
The first thread increments the zero it read, getting one.
The second thread increments the zero it read, also getting one.
The first thread writes the one it computed to i.
The second thread writes the one it computed to i.
The latest value written to i is one, so any thread that accesses i now will see one.
Notice that an increment was lost, even though every thread always read the latest value written by any other thread. The volatile keyword gives visibility, not atomicity.
You can use synchronized to form complex atomic operations. If you just need simple ones, you can use the various Atomic* classes that Java provides.
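For example, a minimal counter sketch using AtomicInteger (class and method names are made up for illustration):

import java.util.concurrent.atomic.AtomicInteger;

public class SafeCounter {

    private final AtomicInteger i = new AtomicInteger();

    public int increment() {
        // the read-modify-write happens as a single atomic operation,
        // so concurrent increments are never lost
        return i.incrementAndGet();
    }
}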
A use-case for volatile would be reading/writing memory that is mapped to device registers, for example on a micro-controller where something other than the CPU may be reading/writing values at that "memory" address, so the compiler must not optimise accesses to that variable away.
The Java volatile keyword is used to mark a Java variable as "being stored in main memory". That means, that every read of a volatile variable will be read from the computer's main memory, and not from the cache and that every write to a volatile variable will be written to main memory, and not just to the cache.
It guarantees that you are accessing the newest value of this variable.
P.S. Use larger loops to notice such bugs; for example, try to iterate 10^9 times.
I have a class something like this:
public class Outer {

    public static final TaskUpdater TASK_UPDATER = new TaskUpdater() {
        public void doSomething(Task task) {
            // uses and modifies task and some other logic
        }
    };

    public void taskRelatedMethod() {
        // some logic
        TASK_UPDATER.doSomething(new Task());
        // some other logic
    }
}
I've noticed some strange behaviour when running this in a multi-threaded environment that I can't reproduce locally, and I suspect it's a threading issue. Is it possible for two instances of Outer to somehow interfere with each other by both calling doSomething on TASK_UPDATER? Each will be passing a different instance of Task into the doSomething method.
Is it possible for two instances of Outer to somehow interfere with each other by both calling doSomething on TASK_UPDATER?
The answer is "it depends". Any time you have multiple threads sharing the same object instances, you may have concurrency issues. In your case, you have multiple instances of Outer sharing the same static TaskUpdater instance. This in itself is not a problem however if TaskUpdater has any fields, they will be shared by the threads. If the threads make any changes to those fields in any way then data synchronization needs to happen and possible blocking around critical code sections. If the TaskUpdater is only reading and operating on the Task argument, which seems to be per Outer instance, then there is no problem.
For example, you could have a task updater like:
public static final TaskUpdater TASK_UPDATER = new TaskUpdater() {
    public void doSomething(Task task) {
        int total = 0;
        for (Job job : task.getJobs()) {
            total += job.getSize();
        }
        task.setTotalSize(total);
    }
};
In this case, the updater only changes the Task instance passed in. It can use local variables without a problem because those are on the stack and not shared between threads. This is thread safe.
However consider this updater:
public static final TaskUpdater TASK_UPDATER = new TaskUpdater() {
    private long total = 0;

    public void doSomething(Task task) {
        for (Job job : task.getJobs()) {
            // race condition and memory synchronization issues here
            total += job.getSize();
        }
    }

    public long getTotal() {
        return total;
    }
};
In this case, both threads will be updating the same total field on the shared TaskUpdater. This is not thread safe, since you have race conditions around the += (it is three operations: read, add, write) as well as memory synchronization issues. One thread may have a cached version of total which is 5 and increment it to 6, while another thread has already incremented its cached version of total to 10.
When threads share common fields you need to protect those operations and worry about synchronization in terms of mutex access and memory publishing. In this case, making total an AtomicLong is in order.
private AtomicLong total = new AtomicLong(0);
...
total.addAndGet(job.getSize());
AtomicLong wraps a volatile long so the memory is published appropriately to all threads and it has code that does atomic test/set operations which removes the race conditions.
Hi, I have one synchronized method which returns a time in nanoseconds. Can anyone tell me whether each object will get a unique value in the code below?
public static synchronized Long generateIdforDCR() {
    return System.nanoTime();
}
The call will be made in another class, something like:

for (int i = 0; i < 1000; i++) {
    ClassName cn = new ClassName();
    cn.generateIdforDCR();
}
Will I always get a unique value?
No - there's no guarantee that each call will return a different value. It's not inconceivable that the call (including synchronization) could take less time than the granularity of the internal clock used for nanoTime(). (Indeed, I can see this happen on my laptop.)
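If you want to check the clock granularity yourself, a throwaway sketch like this one (illustrative only; results vary by platform and JVM) counts how often two consecutive nanoTime() calls return the same value:

public class NanoTimeGranularity {
    public static void main(String[] args) {
        long previous = System.nanoTime();
        int duplicates = 0;
        for (int i = 0; i < 1_000_000; i++) {
            long now = System.nanoTime();
            if (now == previous) {
                duplicates++; // two consecutive calls returned the same value
            }
            previous = now;
        }
        System.out.println("duplicate readings: " + duplicates);
    }
}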
It sounds like you should just be using an AtomicLong instead:
private static final AtomicLong counter = new AtomicLong();

public static Long generateIdforDCR() {
    return counter.incrementAndGet();
}
That will give you a unique number (if you call it fewer than 2^64 times) within that run. If you need it to be unique within a larger scope (e.g. across multiple sequential runs, or potentially multiple concurrent runs of different processes) then you'll need a slightly different approach.
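One possible approach for that larger scope, purely as an illustration and not necessarily what was meant above, is to drop the counter entirely and use a UUID:

import java.util.UUID;

public class IdGenerator {
    // Statistically unique across runs and across processes,
    // at the cost of a 128-bit string identifier instead of a long.
    public static String generateIdforDCR() {
        return UUID.randomUUID().toString();
    }
}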
In the "Effective Java", the author mentioned that
while (!done) i++;
can be optimized by HotSpot into
if (!done) {
    while (true) i++;
}
I am very confused about it. The variable done is usually not a constant, so why can the compiler optimize it that way?
The author assumes there that the variable done is a local variable, which does not have any requirements in the Java Memory Model to expose its value to other threads without synchronization primitives. Or said another way: the value of done won't be changed or viewed by any code other than what's shown here.
In that case, since the loop doesn't change the value of done, its value can be effectively ignored, and the compiler can hoist the evaluation of that variable outside the loop, preventing it from being evaluated in the "hot" part of the loop. This makes the loop run faster because it has to do less work.
This works in more complicated expressions too, such as the length of an array:
int[] array = new int[10000];
Random random = new Random();
for (int i = 0; i < array.length; ++i) {
    array[i] = random.nextInt();
}
In this case, the naive implementation would evaluate the length of the array 10,000 times, but since the variable array is never assigned and the length of the array will never change, the evaluation can change to:
int[] array = new int[10000];
Random random = new Random();
for (int i = 0, $l = array.length; i < $l; ++i) {
    array[i] = random.nextInt();
}
Other optimizations also apply here unrelated to hoisting.
Hope that helps.
Joshua Bloch's "Effective Java" explains why you must be careful when sharing variables between threads. If there is no explicit happens-before relation between threads, the HotSpot compiler is allowed to optimize the code for speed, as shown by dmide.
Most modern microprocessors use various out-of-order execution strategies. This leads to a weak consistency model, which is also the basis for the Java Memory Model. The idea is that as long as the programmer does not explicitly express the need for inter-thread coordination, the processor and the compiler are free to apply such optimizations.
The two keywords volatile (atomicity & visibility) and synchronized (atomicity & visibility & mutual exclusion) are used for expressing the visibility of changes (for other threads). In addition, you must know the happens-before rules (see Goetz et al., "Java Concurrency in Practice", p. 341f (JCP), and Java Language Specification §17).
So, what happens when System.out.println() is called? See above.
First of all, you need two System.out.println() calls: one in the main method (after changing done) and one in the started thread (in the while loop). Now we must consider the program order rule and the monitor lock rule from JLS §17. The short version: given a common lock object M, everything that happens in a thread A before A unlocks M is visible to another thread B at the moment B locks M (see JCP).
In our case, the two threads share a common PrintStream object in System.out. If you look inside println(), you will see a synchronized(this) block.
Conclusion: both threads share a common lock M which is locked and unlocked, so System.out.println() "flushes" the state change of the variable done.
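A hedged sketch of that situation (class and field names made up for illustration): without volatile or synchronized, the shared println calls are what end up publishing the change.

public class PrintlnVisibility {

    private static boolean done; // note: not volatile, not synchronized

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            int i = 0;
            while (!done) {
                i++;
                // Both threads lock the same monitor inside println (the System.out
                // PrintStream), which in practice publishes the change to done.
                // This is a side effect of the shared lock, not something to rely on.
                System.out.println("i = " + i);
            }
        });
        reader.start();
        Thread.sleep(1000);
        done = true;
        System.out.println("done set"); // same PrintStream, same lock
    }
}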
public class StopThread {

    private static boolean stopRequested;

    private static synchronized void requestStop() {
        stopRequested = true;
    }

    private static synchronized boolean stopRequested() {
        return stopRequested;
    }

    public static void main(String[] args) throws InterruptedException {
        Thread backgroundThread = new Thread(new Runnable() {
            public void run() {
                int i = 0;
                while (!stopRequested())
                    i++;
            }
        });
        backgroundThread.start();
        TimeUnit.SECONDS.sleep(1);
        requestStop();
    }
}
The above code from Effective Java is correct; it is equivalent to declaring stopRequested as volatile.
private static boolean stopRequested() {
    return stopRequested;
}
If this method omits the synchronized keyword, the program does not work correctly.
I think this is because hoisting happens when the method omits the synchronized keyword.
If you add System.out.println("i = " + i); in the while loop, the hoisting no longer happens and the program stops as expected. Is it because the println method is thread safe, so the JVM cannot optimize that code segment?