I tested the piece of code below on HotSpot and Android ART, but got different results.
On HotSpot, MyThread never sees the updated isRunning; it reads isRunning = true forever.
But when I test it on ART, MyThread does see the updated isRunning and exits the loop normally.
From what I know of the Java happens-before rules, a non-volatile field is not visible across threads, just like the behavior of the code below on HotSpot.
Does this depend on the VM implementation? Or does Android ART have its own optimization?
class MyThread extends Thread {
    public boolean isRunning = true;

    @Override
    public void run() {
        System.out.println("MyThread running");
        while (true) {
            if (isRunning == false) break;
        }
        System.out.println("MyThread exit");
    }
}
public class RunThread {
    public static void main(String[] args) {
        new RunThread().runMain();
    }

    public void runMain() {
        MyThread thread = new MyThread();
        try {
            thread.start();
            Thread.sleep(500);
            thread.isRunning = false;
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}
a non-volatile field is not visible across threads, just like the behavior of the code below on HotSpot.
That's not quite right. A non-volatile write, absent any additional synchronization or other happens-before relationship, is not guaranteed to be visible to a read of that same variable in another thread. It's allowed to be visible, though. I've absolutely seen HotSpot make writes visible across threads despite no happens-before relationship. Based on my experience with this, I suspect that if you remove that Thread.sleep call in your code, HotSpot will also make the write to isRunning visible to the thread, despite the lack of any happens-before relationship between the write and the read.
You're definitely right that it's VM-specific, and it's possibly/likely even processor architecture–specific, since different architectures may give different amounts of synchronization for "free", or have different amounts of cache that affect whether a memory address is read from a core's cache or fetched from main memory.
In summary, you should never rely on this sort of behavior working any specific way on any specific VM—it's liable to change without warning.
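If you do need the write to become visible reliably on every VM, establish the happens-before relationship yourself, for example by declaring the flag volatile. A minimal sketch of that change to the class from the question (declaring it volatile is just one option; an AtomicBoolean or proper synchronization would also work):

class MyThread extends Thread {
    // volatile establishes a happens-before between the write in the main
    // thread and the read in this thread, so the update is guaranteed to be seen
    public volatile boolean isRunning = true;

    @Override
    public void run() {
        System.out.println("MyThread running");
        while (isRunning) {
            // spin until another thread clears the flag
        }
        System.out.println("MyThread exit");
    }
}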
I am not very good at multithreading and am baffled by this code:
public class Main {
    public static void main(String... args) throws Exception {
        new Thread(Main::test).start();
    }

    private static synchronized void test() {
        new Thread(Main::test).start();
        System.out.println("TEST");
    }
}
Can it result in a deadlock or not? If so, why have I not been able to get it to deadlock? My thinking is: thread 1 acquires the lock on test(), then another thread, created inside test(), tries to acquire it, and they should end up waiting on each other. But they don't; why not?
I know that adding join() in test() will make it deadlock, but how come the example below deadlocks even though it doesn't use join at all?
This code results in a deadlock literally every time I run it:
public class Main {
    public static void main(String... args) {
        new Thread(Main::test).start();
        new Thread(Main::test2).start();
    }

    private static void test() {
        synchronized (Integer.class) {
            try {
                Thread.sleep(1);
            } catch (Exception e) {
            }
            synchronized (Float.class) {
                System.out.println("Acquired float");
            }
        }
    }

    private static void test2() {
        synchronized (Float.class) {
            try {
                Thread.sleep(1);
            } catch (Exception e) {
            }
            synchronized (Integer.class) {
                System.out.println("Acquired integer");
            }
        }
    }
}
No, the code in the first example cannot deadlock. A synchronized method only blocks other threads for as long as it runs, and starting a new thread does not wait for that thread to do anything. Each newly started thread simply waits until the previous thread exits test() and releases the lock, then acquires it in turn.
The code in the second example deadlocks because the two threads acquire the locks in opposite order, and the sleeps make it almost certain that each thread grabs its first lock before trying to take the one the other thread is already holding.
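The standard cure is to make every thread acquire the locks in the same order. A minimal sketch of the second example rewritten that way (illustration only; the sleeps are omitted):

// Both methods take Integer.class first and Float.class second, so the two
// threads can never each hold one lock while waiting for the other.
private static void test() {
    synchronized (Integer.class) {
        synchronized (Float.class) {
            System.out.println("Acquired float");
        }
    }
}

private static void test2() {
    synchronized (Integer.class) {
        synchronized (Float.class) {
            System.out.println("Acquired integer");
        }
    }
}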
When you're at the phase where you're first learning how to think about concurrency and related problems, I would very much recommend using physical props to keep your thoughts and hypotheses clear and explicit.
For example, grab an A3 sheet of paper and set up a "race track" where you use something like Monopoly pieces to represent what you're doing in your code, what you expect to happen, and what your experiments show actually happens.
When your experiments don't work out, take a small piece of the beginning first, and verify it. Then add some more, and so on.
It helps if you read about how actual computers (not the CS ideal or conceptual computers) currently work. How the CPU gets data out of the main memory into its cache. How two or three CPUs decide which one of them can handle data in one cache line at a time. Then, how the Java Memory Model needs you to write your source code so that the JVM knows what you actually mean to happen.
I'm looking for a recommendation on how to make this code thread-safe with locks in Java. I know there are a lot of gotchas with locks: obscure problems, race conditions, and so on can pop up. Here is the basic idea of what I'm trying to achieve, implemented rather naïvely:
public class MultipleThreadWriter {
    boolean isUpgrading = false;
    boolean isWriting = false;

    public void writeData(String uniqueId) {
        if (isUpgrading)
            //block until isUpgrading is false
        isWriting = true;
        {
            //do write stuff
        }
        isWriting = false;
    }

    public void upgradeSystem() {
        if (isWriting)
            //block until isWriting is false
        isUpgrading = true;
        {
            //do updates
        }
        isUpgrading = false;
    }
}
The basic idea is that multiple threads are allowed to write data simultaneously. It doesn't matter, since no two threads will ever be writing to data pertaining to the same uniqueId. However, the "system upgrade" manipulates data for all uniqueIds, so it must block (wait in line) until no data is being written before it can start, at which point it blocks all writes until it is finished. (It is definitely not a consumer/producer pattern going on here- upgrading occurs arbitrarily, i.e. has no relation to the data being written.)
This sounds like a good application for a readers-writer lock.
However, in this case your "readers" are the small update tasks that can all run concurrently, and your "writer" is the system upgrade task.
There's an implementation of this in the Java standard library:
java.util.concurrent.locks.ReentrantReadWriteLock
The lock has fair and non-fair modes. If you want the system upgrade to run as soon as possible after it's scheduled, then use the fair mode of the lock. If you want the upgrade to be applied during idle time (i.e., wait until there are no small updates going on), then you can use the non-fair mode instead.
Since this is a bit of an unorthodox application of the readers-writer lock (your readers are actually writing too!), make sure to comment this well in your code. You might even consider writing a wrapper around the ReentrantReadWriteLock class that provides localUpdateLock vs globalUpdateLock methods, which delegate to the readLock and writeLock, respectively.
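For illustration, here is one possible shape for such a wrapper. The class name and the localUpdateLock/globalUpdateLock methods are hypothetical suggestions, not an existing API:

import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical wrapper that renames the read/write locks to match the domain:
// many concurrent "local updates" vs. one exclusive "global upgrade".
public class UpgradeAwareLock {
    // 'true' selects the fair mode; use the no-argument constructor for non-fair
    private final ReentrantReadWriteLock rwl = new ReentrantReadWriteLock(true);

    public Lock localUpdateLock() {
        return rwl.readLock();   // shared: writes to distinct uniqueIds may run concurrently
    }

    public Lock globalUpdateLock() {
        return rwl.writeLock();  // exclusive: the system upgrade holds it alone
    }
}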
Based on the answer from @DaoWen, this is my untested solution.
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class MultipleThreadWriter {
    private final ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();
    private final Lock r = rwl.readLock();
    private final Lock w = rwl.writeLock();

    public void writeData() {
        r.lock();
        try {
            //do write stuff
        } finally {
            r.unlock();
        }
    }

    public void upgradeSystem() {
        w.lock();
        try {
            //do updates
        } finally {
            w.unlock();
        }
    }
}
I wrote an example trying to understand volatile.
public class VolatileExample {
    private volatile boolean close = false;

    public void shutdown() {
        close = true;
    }

    public void work() {
        Thread t1 = new Thread(new Runnable() {
            public void run() {
                while (!close) {
                }
            }
        });
        Thread t2 = new Thread(new Runnable() {
            public void run() {
                while (!close) {
                    shutdown();
                }
            }
        });
        t1.start();
        t2.start();
        try {
            t1.join();
            t2.join();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }

    public static void main(String[] args) {
        VolatileExample volatileExample = new VolatileExample();
        volatileExample.work();
    }
}
It stopped as I expected. However, when I removed volatile from the close field and ran it many times, I expected the program not to stop, because thread t1 should not see the change that thread t2 makes to the close variable; yet the program ended successfully every time. So I am confused: if it works without volatile, what is volatile actually for? Or can you give a better example that shows the difference between using volatile and not using it?
Thank you!
The memory model says only that changes to non-volatile fields may not be visible to other threads.
Perhaps your runtime environment was in a cooperative mood.
Changes to nonvolatile fields are sometimes visible to other threads, and sometimes not. How long they take to be visible to other threads can vary by orders of magnitude depending on what other processing the machine is doing, the number of processor chips and cores on the machine, the architecture of the cache memory on the machine, etc.
Ultimately, though, it comes down to this: buggy concurrency code can succeed the first 999,999 times, and fail on the millionth time. That often means it passes all tests, then fails in production when things really matter. For that reason, it's important when writing concurrent code that one make the best possible effort to ensure the code is correct - and that means using volatile for variables accessed from multiple threads even when it doesn't seem to make a difference in testing.
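If you want an example where the missing volatile usually does show up, give the JIT a hot, empty loop with no other memory effects, so it has a chance to hoist the read out of the loop. This is a sketch, not a guarantee: the JVM is still allowed to behave either way, but on typical HotSpot builds the non-volatile version often never terminates. The class and variable names are just illustrative.

public class HoistedReadExample {
    // Without 'volatile' the JIT may compile the hot loop as if 'running'
    // never changes; add 'volatile' here and the loop reliably terminates.
    private static boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            long spins = 0;
            while (running) {
                spins++;              // no I/O or synchronization inside the loop
            }
            System.out.println("Worker stopped after " + spins + " spins");
        });
        worker.start();

        Thread.sleep(1000);           // give the JIT time to compile the hot loop
        running = false;              // without volatile, this write may never be observed
        worker.join(2000);            // wait at most 2 seconds for the worker
        System.out.println(worker.isAlive() ? "Worker is still spinning" : "Worker exited");
    }
}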
I'm having some issues with Java multithreading, best explained with an example:
class Thread1 extends Thread
{
    boolean val = false;

    public void set()
    {
        val = true;
    }

    public void run()
    {
        while (true)
        {
            if (val == true)
            {
                System.out.println("true");
                val = false;
            }
            try
            {
                sleep(1);
            }
            catch (InterruptedException e)
            {
                // TODO Auto-generated catch block
                e.printStackTrace();
            }
        }
    }
}
So this is a simple class which is run in a separate thread.
Now consider this case:
1) I start the thread in the class above
2) from some other thread I call the Thread1.set() function
3) the condition on the Thread1.run() function evaluates to true
Now, the thing is that if I remove the sleep(1) from the above code, this condition never evaluates to true.
So my question is: is there any other way I can interrupt the run() function so that
other functions may set the variables that would be used inside the run()function?
(I'm making a game on Android, so the OpenGL renderer runs in one thread and my game logic runs in another, and I would like to sync them every frame or two.)
If only a single thread (i.e. one other than the thread reading it) is modifying val, then make it volatile.
Your boolean variable is not volatile, which means there is no guarantee that two different threads see the same value. By sleeping, the virtual machine might happen to make the value set from a different thread visible to this thread (this is a guess, nothing more), but this behavior should not be relied upon in any way. You should use either a volatile boolean variable or an AtomicBoolean, depending on your needs.
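For illustration, a minimal sketch of the AtomicBoolean variant, keeping the structure of your class (names are just carried over from your example):

import java.util.concurrent.atomic.AtomicBoolean;

class Thread1 extends Thread
{
    // AtomicBoolean gives volatile-like visibility plus atomic read-modify-write
    private final AtomicBoolean val = new AtomicBoolean(false);

    public void set()
    {
        val.set(true);
    }

    public void run()
    {
        while (true)
        {
            // compareAndSet reads and clears the flag in one atomic step,
            // so a set() arriving in between is never lost
            if (val.compareAndSet(true, false))
            {
                System.out.println("true");
            }
        }
    }
}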
Maybe this question has been asked many times before, but I never found a satisfying answer.
The problem:
I have to simulate a process scheduler using the round-robin strategy. I'm using threads to simulate processes and multiprogramming; everything works fine with the JVM managing the threads. But the thing is that now I want control over all the threads, so that I can run each thread alone for a certain quantum (time slice), just like real OS process schedulers do.
What I'm thinking to do:
I want to have a list of all threads. As I iterate over the list, I want to execute each thread for its corresponding quantum; as soon as the time is up, I want to pause that thread indefinitely until all the other threads in the list have run, and then, when I reach the same thread again, resume it, and so on.
The question:
So is there a way, without using the deprecated methods stop(), suspend(), and resume(), to get this kind of control over threads?
Yes, there is:
Object.wait(), Object.notify(), and a bunch of other, much nicer synchronization primitives in java.util.concurrent.
Who said Java is not low level enough?
Here is my 3-minute solution. I hope it fits your needs.
import java.util.ArrayList;
import java.util.List;

public class ThreadScheduler {

    private List<RoundRobinProcess> threadList
            = new ArrayList<RoundRobinProcess>();

    public ThreadScheduler() {
        for (int i = 0; i < 100; i++) {
            threadList.add(new RoundRobinProcess());
            new Thread(threadList.get(i)).start();
        }
    }

    private class RoundRobinProcess implements Runnable {

        private final Object lock = new Object();
        private volatile boolean suspend = false, stopped = false;

        @Override
        public void run() {
            while (!stopped) {
                while (!suspend) {
                    // do work
                }
                synchronized (lock) {
                    // guard the wait with the flags so a notify that arrives
                    // before we start waiting is not missed
                    while (suspend && !stopped) {
                        try {
                            lock.wait();
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt();
                            return;
                        }
                    }
                }
            }
        }

        public void suspend() {
            suspend = true;
        }

        public void stop() {
            suspend = true;
            stopped = true;
            synchronized (lock) {
                lock.notifyAll();
            }
        }

        public void resume() {
            suspend = false;
            synchronized (lock) {
                lock.notifyAll();
            }
        }
    }
}
Please note that "do work" should not be blocking.
Short answer: no. You don't get to implement a thread scheduler in Java, as it doesn't operate at a low enough level.
If you really do intend to implement a process scheduler, I would expect you to need to hook into the underlying operating system calls, and as such I doubt this will ever be a good idea (if remotely possible) in Java. At the very least, you wouldn't be able to use java.lang.Thread to represent the running threads so it may as well all be done in a lower-level language like C.