long and double assignments are not atomic - How does it matter? - java

We know that long and double assignments are not atomic in Java unless the variables are declared volatile. My question is: how does this really matter in everyday programming practice?
For instance, see the classes below, whose objects are shared among multiple threads.
/**
 * The class below is not thread safe. Assignments to the int value are
 * atomic, but changes are not guaranteed to be visible to other threads.
 */
public final class SharedInt {
    private int value;

    public void setValue(int value) {
        this.value = value;
    }

    public int getValue() {
        return this.value;
    }
}
Now consider another SharedLong
/**
 * The class below is not thread safe because assignments to the long
 * are not atomic, and changes are not guaranteed to be visible to
 * other threads.
 */
public final class SharedLong {
    private long value;

    public void setValue(long value) {
        this.value = value;
    }

    public long getValue() {
        return this.value;
    }
}
Both of the above versions are not thread safe. In the case of int, threads may see stale values of the integer. In the case of long, they may see corrupt as well as stale values of the variable.
In both cases, if an instance is not shared among multiple threads, the class is safe.
To make the above classes thread safe, we need to declare the int and the long volatile, or make the methods synchronized.
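For illustration, a minimal sketch of the volatile variant of SharedLong just described (the same change applies to SharedInt):
public final class SharedLong {
    // volatile makes the 64-bit write atomic and guarantees visibility
    private volatile long value;

    public void setValue(long value) {
        this.value = value;
    }

    public long getValue() {
        return this.value;
    }
}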
This makes me wonder: since both need to be declared volatile or synchronized for multithreaded access anyway, how does it really matter in the normal course of programming that assignments to long and double are not atomic? What are the scenarios where the fact that long assignments are not atomic makes a difference?

I made a cool little example of this a while ago:
public class UnatomicLong implements Runnable {
    private static long test = 0;

    private final long val;

    public UnatomicLong(long val) {
        this.val = val;
    }

    @Override
    public void run() {
        while (!Thread.interrupted()) {
            test = val;
        }
    }

    public static void main(String[] args) {
        Thread t1 = new Thread(new UnatomicLong(-1));
        Thread t2 = new Thread(new UnatomicLong(0));

        System.out.println(Long.toBinaryString(-1));
        System.out.println(pad(Long.toBinaryString(0), 64));

        t1.start();
        t2.start();

        long val;
        while ((val = test) == -1 || val == 0) {
        }

        System.out.println(pad(Long.toBinaryString(val), 64));
        System.out.println(val);

        t1.interrupt();
        t2.interrupt();
    }

    // prepend 0s to the string to make it the target length
    private static String pad(String s, int targetLength) {
        int n = targetLength - s.length();
        for (int x = 0; x < n; x++) {
            s = "0" + s;
        }
        return s;
    }
}
One thread constantly tries to assign 0 to test while the other tries to assign -1. Eventually you'll end up with a number that's either 0b1111111111111111111111111111111100000000000000000000000000000000 or 0b0000000000000000000000000000000011111111111111111111111111111111. (Assuming you aren't on a 64 bit JVM. Most, if not all, 64 bit JVMs will actually do atomic assignment for longs and doubles.)

Where improper programming with an int may result in stale values being observed, improper programming with a long may result in values that never actually existed being observed.
This could theoretically matter for a system that only needs to be eventually-correct and not point-in-time correct, so skipped synchronization for performance. Although skipping a volatile field declaration in the interest of performance seems on casual inspection like foolishness.

It makes a difference if SharedInt or SharedLong are going to be accessed simultaneously. As you said, one thread may read a stale int, or a stale or corrupted long.
This could be important if the value was being used to reference an array.
Or with display in a GUI.
How about writing some values over a network and sending bad data. Now clients are confused or crashing.
Incorrect values could be stored to a database.
Repeated calculations could be corrupted...
As you requested in the comments, for long specifically:
Long values are frequently used for time calculations. A corrupted value could throw off loops that wait for an amount of time before performing some operation, such as a heartbeat in a networking app.
You could report to a client that is synchronizing its clock with you that the time is 80 or 1000 years in the past.
Longs and ints are commonly used for bit-packed fields that encode many different flags. Your flags would be entirely corrupted.
Longs are frequently used as unique IDs. This could corrupt hash tables you're building.
Obviously lots of bad, bad stuff could happen. If this value needs to be thread safe and you want your software to be reliable, declare these variables volatile, use an atomic variable, or synchronize the access and set methods.
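For example, a minimal sketch of the atomic-variable option mentioned above (the class and method names here are just illustrative):
import java.util.concurrent.atomic.AtomicLong;

public final class AtomicSharedLong {
    private final AtomicLong value = new AtomicLong();

    public void setValue(long newValue) {
        value.set(newValue);            // atomic, immediately visible write
    }

    public long getValue() {
        return value.get();             // atomic, up-to-date read
    }

    public long increment() {
        return value.incrementAndGet(); // compound read-modify-write, also atomic
    }
}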

Related

Is repeatedly trying to get locks a good solution to prevent deadlocks?

My question is about synchronisation and preventing deadlocks when using threads. In this example, an object simply holds a single long value, and multiple threads call swapValue on those objects.
public class Data {
    private long value;

    public Data(long value) {
        this.value = value;
    }

    public synchronized long getValue() {
        return value;
    }

    public synchronized void setValue(long value) {
        this.value = value;
    }

    public void swapValue(Data other) {
        long temp = getValue();
        long newValue = other.getValue();
        setValue(newValue);
        other.setValue(temp);
    }
}
The swapValue method should be thread safe and should not skip swapping the values if the resources are not available. Simply using the synchronized keyword on the method signature will result in a deadlock. I came up with this (apparently) working solution, which is only based on the probability that one thread unlocks its resource and the other tries to claim it while the resource is still unlocked.
private Lock lock = new ReentrantLock();
...
public void swapValue(Data other) {
    lock.lock();
    while (!other.lock.tryLock()) {
        lock.unlock();
        lock.lock();
    }
    long temp = getValue();
    long newValue = other.getValue();
    setValue(newValue);
    other.setValue(temp);
    other.lock.unlock();
    lock.unlock();
}
To me this looks like a hack. Is this a common solution for this kind of problem? Are there solutions that are "more deterministic" in their behaviour and also applicable in practice?
There are two issues at play here:
First, mixing Data.lock with the built-in lock used by the synchronized keyword
Second, inconsistent locking order among four (!) locks - this.lock, other.lock, the built-in lock of this, and the built-in lock of other
Even without synchronized, a.swapValue(b) and b.swapValue(a) can deadlock unless you use your approach to try to spin while locking and unlocking, which is inefficient.
One approach that you could take is to add a field with some kind of final, unique ID to each Data object. When swapping the data of two objects, lock the one with the lower ID before the one with the higher ID, regardless of which is this and which is other. Note that System.identityHashCode is unfortunately not unique, so it can't be easily used here.
The unlock ordering isn't critical here, but unlocking in the reverse order of locking is generally a good practice to follow where possible.
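A sketch of how such a unique, permanent ID might be assigned (the AtomicLong counter and getID() accessor here are illustrative assumptions, matching the getID() used in the answer below):
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class Data {
    private static final AtomicLong NEXT_ID = new AtomicLong(); // process-wide counter

    private final long id = NEXT_ID.getAndIncrement(); // unique, never changes
    final Lock lock = new ReentrantLock();
    private long value;

    public Data(long value) {
        this.value = value;
    }

    public long getID() {
        return id;
    }
}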
@Nanofarad has the right idea: give every Data instance a unique, permanent numeric ID, and then use those IDs to decide which object to lock first. Here's what that might look like in practice:
private static void lockBoth(Data a, Data b) {
    // Lock the Data with the lower ID first, so that every thread
    // acquires the two locks in the same global order.
    Lock first = a.lock;
    Lock second = b.lock;
    if (a.getID() > b.getID()) {
        first = b.lock;
        second = a.lock;
    }
    first.lock();
    second.lock();
}

private static void unlockBoth(Data a, Data b) {
    a.lock.unlock();
    b.lock.unlock();
    // Note: @Queeg suggests in the comments below that, in the general case,
    // it would be good practice to make this routine always unlock the
    // two locks in the order opposite to which lockBoth() locked them.
    // See https://stackoverflow.com/a/8949355/801894 for an explanation.
}

public void swapValue(Data other) {
    lockBoth(this, other);
    ...swap 'em...
    unlockBoth(this, other);
}
In your case, just use AtomicInteger or AtomicLong instead of reinventing the wheel. About the synchronization and deadlocks part of your question in general: DO NOT RELY ON PROBABILITY. It is way too tricky and too easy to get wrong, unless you're an experienced mathematician who knows exactly what you're doing, and even then it is risky. One example where probability is relied on is UUIDs, but if computers get fast enough, code that shouldn't reasonably break before the end of the universe could break in a matter of milliseconds. It is better to write code that does not rely on probability, especially concurrent code.
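As a rough sketch of that suggestion (illustrative only; the swap itself still needs the ordered locking shown above, because swapping two values is a compound action):
import java.util.concurrent.atomic.AtomicLong;

public class Data {
    private final AtomicLong value; // atomic get/set without synchronized accessors

    public Data(long initialValue) {
        this.value = new AtomicLong(initialValue);
    }

    public long getValue() {
        return value.get();
    }

    public void setValue(long newValue) {
        value.set(newValue);
    }
}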

Why does a synchronized getter work like a volatile read?

This program does not terminate!
public class Main extends Thread {
    private int i = 0;

    private int getI() { return i; }

    private void setI(int j) { i = j; }

    public static void main(String[] args) throws InterruptedException {
        Main main = new Main();
        main.start();
        Thread.sleep(1000);
        main.setI(10);
    }

    public void run() {
        System.out.println("Awaiting...");
        while (getI() == 0) ;
        System.out.println("Done!");
    }
}
I understand this happens because the CPU core running the Awaiting loop always sees the cached copy of i and misses the update.
I also understand that if I declare the field as volatile private int i = 0; then the while (getI()...) loop will behave[1] as if it is consulting main memory every time, so it will see the updated value and my program will terminate.
My question is: If I make
synchronized private int getI() {return i; }
It surprisingly works!! The program terminates.
I understand that synchronized is used to prevent two different threads from simultaneously entering a method, but here only one thread ever enters getI(). So what sorcery is this?
Edit 1
This (synchronization) guarantees that changes to the state of the object are visible to all threads
So rather than directly having the private state field i, I made following changes:
In place of private int i = 0; I now have private Data data = new Data();, i = j is changed to data.i = j, and return i is changed to return data.i.
Now the getI and setI methods (which may be marked synchronized) are not doing anything to the state of the object in which they are defined. Even now, using the synchronized keyword causes the program to terminate! The fun is in knowing that the object whose state is actually changing (Data) has no synchronization or anything built into it. Then why?
[1] It will probably just behave as if it does; what actually, really happens is unclear to me.
It is just coincidence, or platform dependent, or specific to a particular JVM; it is not guaranteed by the JLS. So do not depend on it.
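If you want termination to be guaranteed by the JLS rather than by luck, a minimal sketch of the two documented options (as drop-in changes to the Main class above) would be:
// Option 1: a volatile field gives the write in main() a happens-before
// relationship with the reads in the spinning loop
private volatile int i = 0;

// Option 2: synchronize BOTH accessors, so the writer's unlock
// happens-before the reader's subsequent lock on the same monitor
private synchronized int getI() { return i; }
private synchronized void setI(int j) { i = j; }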

How to solve race condition of two writers using immutable objects

I was thinking about how to solve a race condition between two threads that try to write to the same variable, using immutable objects and without the help of keywords such as synchronized (locks) or volatile in Java.
But I couldn't figure it out. Is it possible to solve this problem with such an approach at all?
public class Test {
    private static IAmSoImmutable iAmSoImmutable;

    private static final Runnable increment1000Times = () -> {
        for (int i = 0; i < 1000; i++) {
            iAmSoImmutable.increment();
        }
    };

    public static void main(String... args) throws Exception {
        for (int i = 0; i < 10; i++) {
            iAmSoImmutable = new IAmSoImmutable(0);

            Thread t1 = new Thread(increment1000Times);
            Thread t2 = new Thread(increment1000Times);
            t1.start();
            t2.start();
            t1.join();
            t2.join();

            // Prints a different result every time -- why? :
            System.out.println(iAmSoImmutable.value);
        }
    }

    public static class IAmSoImmutable {
        private int value;

        public IAmSoImmutable(int value) {
            this.value = value;
        }

        public IAmSoImmutable increment() {
            return new IAmSoImmutable(++value);
        }
    }
}
If you run this code you'll get different answers every time, which means a race condition is happening.
You cannot solve a race condition without using one of the existing synchronisation (or volatile) techniques. That is what they were designed for. If it were possible, there would be no need for them.
More particularly, your code seems to be broken. This method:
public IAmSoImmutable increment() {
    return new IAmSoImmutable(++value);
}
is nonsense for two reasons:
1) It breaks the immutability of the class, because it changes the object's value field.
2) Its result, a new instance of IAmSoImmutable, is never used. (A corrected sketch follows below.)
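For comparison, a sketch of what an actually immutable version could look like (the getValue() accessor is added here for illustration; it is not in the original code):
public static class IAmSoImmutable {
    private final int value;                  // final: assigned once, never changed

    public IAmSoImmutable(int value) {
        this.value = value;
    }

    public int getValue() {
        return value;
    }

    public IAmSoImmutable increment() {
        return new IAmSoImmutable(value + 1); // no mutation; a new instance carries the new value
    }
}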
The fundamental problem here is that you've misunderstood what "immutability" means.
"Immutability" means — no writes. Values are created, but are never modified.
Immutability ensures that there are no race conditions, because race conditions are always caused by writes: either two threads performing writes that aren't consistent with each other, or one thread performing writes and another thread performing reads that give inconsistent results, or similar.
(Caveat: even an immutable object is effectively mutable during construction — Java creates the object, then populates its fields — so in addition to being immutable in general, you need to use the final keyword appropriately and take care with what you do in the constructor. But, those are minor details.)
With that understanding, we can go back to your initial sentence:
I was thinking about how to solve race condition between two threads which tries to write to the same variable using immutable objects and without helping any keywords such as synchronize(lock)/volatile in java.
The problem here is that you actually aren't using immutable objects: your entire goal is to perform writes, and the entire concept of immutability is that no writes happen. These are not compatible.
That said, immutability certainly has its place. You can have immutable IAmSoImmutable objects, with the only writes being that you swap these objects out for each other. That helps simplify the problem, by reducing the scope of writes that you have to worry about: there's only one kind of write. But even that one kind of write will require synchronization.
The best approach here is probably to use an AtomicReference<IAmSoImmutable>. This provides a non-blocking way to swap out your IAmSoImmutable-s, while guaranteeing that no write gets silently dropped.
(In fact, in the special case that your value is just an integer, the JDK provides AtomicInteger that handles the necessary compare-and-swap loops and so on for threadsafe incrementation.)
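A sketch of that AtomicReference approach (the Counter wrapper and the getValue() accessor are assumptions for illustration, not part of the original code):
import java.util.concurrent.atomic.AtomicReference;

public class Counter {
    // holds the current immutable snapshot
    private final AtomicReference<IAmSoImmutable> ref =
            new AtomicReference<>(new IAmSoImmutable(0));

    public void increment() {
        IAmSoImmutable current;
        IAmSoImmutable next;
        do {
            current = ref.get();                               // read the current snapshot
            next = new IAmSoImmutable(current.getValue() + 1); // build its successor
        } while (!ref.compareAndSet(current, next));           // retry if another thread won the race
    }

    public int value() {
        return ref.get().getValue();
    }
}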
Even if the problems are resolved by:
avoiding the mutation of IAmSoImmutable.value, and
reassigning the new object created within increment() back into the iAmSoImmutable reference,
there are still pieces of your code that are not atomic and that need some sort of synchronization.
A solution would of course be to use a synchronized method:
public synchronized static void increment() {
    iAmSoImmutable = iAmSoImmutable.increment();
}

Thread t1 = new Thread(() -> {
    for (int i = 0; i < 1000; i++) {
        increment();
    }
});

Thread t2 = new Thread(() -> {
    for (int i = 0; i < 1000; i++) {
        increment();
    }
});

Combination of Singleton class and volatile variable

As far as I know, volatile variables are always read from and written to main memory. With that in mind, I thought about the Singleton class. Here is my program:
1. Singleton class
public class Singleton {
    private static Singleton sin;
    private static volatile int count;

    static {
        sin = new Singleton();
        count = 0;
    }

    private Singleton() {
    }

    public static Singleton getInstance() {
        return sin;
    }

    public String test() {
        count++;
        return ("Counted increased!" + count);
    }
}
2. Main class
public class Java {

    /**
     * @param args the command line arguments
     */
    public static void main(String[] args) {
        Derived d1 = new Derived("d1");
        d1.start();
        Derived d2 = new Derived("d2");
        d2.start();
        Derived d3 = new Derived("d3");
        d3.start();
    }
}

class Derived extends Thread {
    String name;

    public Derived(String name) {
        this.name = name;
    }

    public void run() {
        Singleton a = Singleton.getInstance();
        for (int i = 0; i < 10; i++) {
            System.out.println("Current thread: " + name + a.test());
        }
    }
}
I know this may be a dumb question, but I'm not good at multithreading in Java, so this problem confuses me a lot. I thought the static volatile int count variable in the Singleton class would always have the latest value, but apparently it does not...
Can someone help me to understand this?
Thank you very much.
The problem is that volatile has nothing to do with thread synchronization. Even though the read from static volatile int count would indeed always return the latest value, multiple threads may write the same new value back into it.
Consider this scenario with two threads:
count is initialized zero
Thread A reads count, sees zero
Thread B reads count, sees zero
Thread A advances count to 1, stores 1
Thread B advances count to 1, stores 1
Thread A writes "Counted increased! 1"
Thread B writes "Counted increased! 1"
Both threads read the latest value, but since ++ is not an atomic operation, once the read is complete, each thread is on its own. Both threads independently compute the next value and then store it back into the count variable. The net effect is that the variable is incremented only once, even though both threads performed an increment.
If you would like to increment an int from multiple threads, use AtomicInteger.
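For example, a sketch of the Singleton's counter rewritten with AtomicInteger (illustrative only):
import java.util.concurrent.atomic.AtomicInteger;

public class Singleton {
    private static final Singleton sin = new Singleton();
    private static final AtomicInteger count = new AtomicInteger();

    private Singleton() {
    }

    public static Singleton getInstance() {
        return sin;
    }

    public String test() {
        // incrementAndGet() is a single atomic read-modify-write,
        // so no increments are lost and each caller sees a distinct value
        return "Counted increased!" + count.incrementAndGet();
    }
}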
As Jon Skeet indicated, it would be best to use AtomicInteger. Using volatile variables reduces the risk of memory consistency errors, but it doesn't eliminate the need to synchronize compound actions such as the increment.
I think this modification would help with your problem.
public synchronized String test() {
    count++;
    return ("Counted increased!" + count);
}
The reader threads are not doing any locking, and until the writer thread comes out of the synchronized block, memory will not be synchronized and the value of count will not be updated in main memory. Both threads read the same value and thus each updates it by adding one; if you want to resolve this, make the test method synchronized.
Read more: http://javarevisited.blogspot.com/2011/06/volatile-keyword-java-example-tutorial.html#ixzz3PGYRMtgE

Is synchronization needed while reading if no contention could occur

Consider the code snippet below:
package sync;

public class LockQuestion {

    private String mutable;

    public synchronized void setMutable(String mutable) {
        this.mutable = mutable;
    }

    public String getMutable() {
        return mutable;
    }
}
At time Time1, thread Thread1 will update the mutable variable. Synchronization is needed in the setter in order to flush memory from the local cache to main memory.
At time Time2 (Time2 > Time1, no thread contention), thread Thread2 will read the value of mutable.
The question is: do I need to put synchronized on the getter? It looks like this won't cause any issues, since memory should be up to date and Thread2's local cache should be invalidated and updated by Thread1, but I'm not sure.
Rather than wonder, why not just use the atomic references in java.util.concurrent?
(and for what it's worth, my reading of happens-before does not guarantee that Thread2 will see changes to mutable unless it also uses synchronized ... but I always get a headache from that part of the JLS, so use the atomic references)
It will be fine if you make mutable volatile; see the "cheap read-write lock" idiom for details.
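A minimal sketch of that idiom applied to the class above (keeping the synchronized setter, which is what the idiom relies on when writes become compound check-then-act operations):
public class LockQuestion {
    // volatile gives readers visibility without locking;
    // the setter stays synchronized in case the write ever
    // grows into a compound action
    private volatile String mutable;

    public synchronized void setMutable(String mutable) {
        this.mutable = mutable;
    }

    public String getMutable() {
        return mutable;
    }
}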
Are you absolutely sure that the getter will be called only after the setter is called? If so, you don't need the getter to be synchronized, since concurrent reads do not need to synchronized.
If there is a chance that get and set can be called concurrently then you definitely need to synchronize the two.
If you worry so much about performance in the reading thread, then read the value once using proper synchronization, volatile, or atomic references, and assign it to a plain old variable.
The assignment to the plain variable is guaranteed to happen after the atomic read (because how else could it get the value?), and if the value will never be written by another thread again, you are all set.
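A small sketch of that pattern (the Reader class and its method names are made up for illustration, and it assumes getMutable() is a volatile, synchronized, or atomic read as discussed above):
public class Reader {
    private final LockQuestion source; // shared instance, written by another thread
    private String snapshot;           // plain field holding the captured value

    public Reader(LockQuestion source) {
        this.source = source;
    }

    public void captureOnce() {
        // one properly synchronized/volatile/atomic read...
        snapshot = source.getMutable();
    }

    public int snapshotLength() {
        // ...after which every use touches only the plain copy, which is
        // safe as long as no other thread writes the shared field again
        return snapshot == null ? 0 : snapshot.length();
    }
}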
I think you should start with something which is correct and optimise later when you know you have an issue. I would just use AtomicReference unless a few nano-seconds is too long. ;)
public static void main(String... args) {
    AtomicReference<String> ars = new AtomicReference<String>();
    ars.set("hello");

    long start = System.nanoTime();
    int runs = 1000 * 1000 * 1000;
    int length = test(ars, runs);
    long time = System.nanoTime() - start;
    System.out.printf("get() costs " + 1000 * time / runs + " ps.");
}

private static int test(AtomicReference<String> ars, int runs) {
    int len = 0;
    for (int i = 0; i < runs; i++)
        len = ars.get().length();
    return len;
}
Prints
get() costs 1219 ps.
A ps is a picosecond, which is one millionth of a microsecond.
This will probably never result in incorrect behavior, but unless you also guarantee the order in which the threads start up, you cannot necessarily guarantee that the compiler didn't reorder the read in Thread2 before the write in Thread1. More specifically, the Java runtime only has to guarantee that threads execute as if they were run serially. So, as long as each thread produces the same output as it would running serially, the entire language stack (compiler, hardware, language runtime) can do pretty much whatever it wants. Including allowing Thread2 to cache the result of LockQuestion.getMutable().
In practice, I would be very surprised if that ever happened. If you want to guarantee that it doesn't, declare LockQuestion.mutable as final and initialize it in the constructor. Or use the following idiom:
private static class LazySomethingHolder {
    public static Something something = new Something();
}

public static Something getInstance() {
    return LazySomethingHolder.something;
}
