Sorry for the question, I'm just stuck at the end of the day.
For a test, I need 10 threads writing to the same hash (it's not really a hash, but something very similar; I need to prove that writes to it are synchronized).
Is this code right?
Random rn = new Random();
Map<Integer, Integer> hash = new MyHashMap<Integer, Integer>();

for (int i = 0; i < 10; i++) {
    Thread th = new MyAddingThread();
    th.start();
}

public class MyAddingThread extends Thread {
    public void run() {
        hash.put(rn.nextInt(), rn.nextInt());
    }
}
Maybe it would be better to change 10 to 100, but I have no idea how to test that hash for synchronization.
HashMap is not thread-safe. Use a ConcurrentHashMap instead.
EDIT
If your real question is whether your code will allow you to test whether any given data structure is thread-safe or not, there really is no reliable way to do so. Multi-thread development can introduce any number of bugs that can be extremely difficult to detect. A data structure is either designed to be thread-safe, or it is not.
That won't work as a test: for it to mean anything, all the threads should try to add to the hash at the same time.
To do that, you can use a CountDownLatch:
In main:
CountDownLatch startSignal = new CountDownLatch(1);
for (int i = 0; i < 10; i++) {
    Thread th = new MyAddingThread();
    th.start();
}
startSignal.countDown();
In the thread:
public class MyAddingThread extends Thread {
    public void run() {
        try {
            startSignal.await();   // wait until main releases all threads at once
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return;
        }
        hash.put(rn.nextInt(), rn.nextInt());
    }
}
The javadoc of this class has a similar (but more complex) example.
Also, if you want to test this properly, create as many threads as your machine has cores, and have each thread loop and insert at least a few hundred items. If you only insert 10 elements, the chances of hitting a concurrency problem are very small.
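A rough sketch of that kind of stress test, assuming the map under test exposes a put-style method (here a plain HashMap stands in for it, and the size check at the end is just one simple way to spot lost updates):

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.CountDownLatch;

public class MapStressTest {
    public static void main(String[] args) throws InterruptedException {
        int threads = Runtime.getRuntime().availableProcessors();
        int insertsPerThread = 1000;
        Map<Integer, Integer> map = new HashMap<>();          // stand-in for the map under test
        CountDownLatch startSignal = new CountDownLatch(1);
        Thread[] workers = new Thread[threads];
        for (int t = 0; t < threads; t++) {
            final int offset = t * insertsPerThread;          // disjoint keys per thread
            workers[t] = new Thread(() -> {
                try {
                    startSignal.await();                      // all threads start writing together
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
                for (int i = 0; i < insertsPerThread; i++) {
                    map.put(offset + i, i);
                }
            });
            workers[t].start();
        }
        startSignal.countDown();
        for (Thread w : workers) {
            w.join();
        }
        System.out.println("size = " + map.size()
                + ", expected = " + (threads * insertsPerThread));
    }
}

With a correctly synchronized map the final size matches the number of inserts; with a plain HashMap it usually falls short, and concurrent puts can even corrupt the map's internal structure.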
A HashMap is not synchronized so you have to do it on your own!
You may synchronize the access:
public class MyAddingThread extends Thread {
    public void run() {
        synchronized (hash) {
            hash.put(rn.nextInt(), rn.nextInt());
        }
    }
}
but then you lose most of the concurrent execution.
Another idea is to use a ConcurrentHashMap:
ConcurrentMap<Integer, Integer> hash = new ConcurrentHashMap<>();

public class MyAddingThread extends Thread {
    public void run() {
        hash.put(rn.nextInt(), rn.nextInt());
    }
}
This would give better performance because there is less blocking code.
The other problem is the use of Random here: it is thread-safe, but as a single shared instance it is a point of contention and will hurt performance.
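If the shared Random becomes the bottleneck, ThreadLocalRandom avoids that contention. A small sketch (the map and the thread count are just placeholders):

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ThreadLocalRandom;

public class ThreadLocalRandomExample {
    static final ConcurrentMap<Integer, Integer> hash = new ConcurrentHashMap<>();

    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[4];
        for (int t = 0; t < threads.length; t++) {
            threads[t] = new Thread(() -> {
                // ThreadLocalRandom.current() is a per-thread generator,
                // so there is no contention on a shared Random instance.
                ThreadLocalRandom rnd = ThreadLocalRandom.current();
                for (int i = 0; i < 1000; i++) {
                    hash.put(rnd.nextInt(), rnd.nextInt());
                }
            });
            threads[t].start();
        }
        for (Thread th : threads) {
            th.join();
        }
        System.out.println("entries: " + hash.size());
    }
}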
Which solution is best for your problem I cannot tell from your little pseudocode.
Related
I want the final count to always be 10000, but even though I have used synchronized here, I'm getting different values other than 10000. Java concurrency newbie.
public class test1 {
    static int count = 0;

    public static void main(String[] args) throws InterruptedException {
        int numThreads = 10;
        Thread[] threads = new Thread[numThreads];

        for (int i = 0; i < numThreads; i++) {
            threads[i] = new Thread(new Runnable() {
                @Override
                public void run() {
                    synchronized (this) {
                        for (int i = 0; i < 1000; i++) {
                            count++;
                        }
                    }
                }
            });
        }

        for (int i = 0; i < numThreads; i++) {
            threads[i].start();
        }
        for (int i = 0; i < numThreads; i++) {
            threads[i].join();
        }
        System.out.println(count);
    }
}
Boris told you how to make your program print the right answer, but the reason it prints the right answer is that your program is effectively single-threaded.
If you implemented Boris's suggestion, then your run() method probably looks like this:
public void run() {
    synchronized (test1.class) {
        for (int i = 0; i < 1000; i++) {
            count++;
        }
    }
}
No two threads can ever be synchronized on the same object at the same time, and there's only one test1.class in your program. That's good because there's also only one count. You always want the number of lock objects and their lifetimes to match the number and lifetimes of the data that they are supposed to protect.
The problem is, you have synchronized the entire body of the run() method. That means, no two threads can run() at the same time. The synchronized block ensures that they all will have to execute in sequence—just as if you had simply called them one-by-one instead of running them in separate threads.
This would be better:
public void run() {
    for (int i = 0; i < 1000; i++) {
        synchronized (test1.class) {
            count++;
        }
    }
}
If each thread releases the lock after each increment operation, then that gives other threads a chance to run concurrently.
On the other hand, all that locking and unlocking is expensive. The multi-threaded version almost certainly will take a lot longer to count to 10000 than a single threaded program would do. There's not much you can do about that. Using multiple CPUs to gain speed only works when there's big computations that each CPU can do independently of the others.
For your simple example, you can use AtomicInteger instead of static int and synchronized.
final AtomicInteger count = new AtomicInteger(0);
And inside the Runnable, only this one line:
count.incrementAndGet();
Synchronizing on the class blocks the whole class for other threads, which matters once you have more complex code with many methods running in a multithreaded environment.
This code doesn't run any faster, because incrementing the same counter one by one is always a single operation that cannot run more than once at a time.
So if you want it to run close to 10x faster, each thread should count into its own counter and the results should be summed at the end. You can do this with thread pools, using an ExecutorService and Future tasks, which can return a result for you; see the sketch below.
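A sketch of that idea, with each task counting into a local variable and the partial results summed via Future.get() (class and variable names are made up for illustration):

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PartialCounts {
    public static void main(String[] args) throws Exception {
        int numThreads = 10;
        ExecutorService pool = Executors.newFixedThreadPool(numThreads);
        List<Future<Integer>> results = new ArrayList<>();
        for (int t = 0; t < numThreads; t++) {
            // each task counts into its own local variable: no shared state, no locking
            Callable<Integer> task = () -> {
                int local = 0;
                for (int i = 0; i < 1000; i++) {
                    local++;
                }
                return local;
            };
            results.add(pool.submit(task));
        }
        int total = 0;
        for (Future<Integer> f : results) {
            total += f.get();   // sum the partial results at the end
        }
        pool.shutdown();
        System.out.println(total); // 10000
    }
}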
The Java multithreading tutorial gives an example of memory consistency errors, but I cannot reproduce it. Is there any other way to simulate a memory consistency error?
The example provided in the tutorial:
Suppose a simple int field is defined and initialized:
int counter = 0;
The counter field is shared between two threads, A and B. Suppose thread A increments counter:
counter++;
Then, shortly afterwards, thread B prints out counter:
System.out.println(counter);
If the two statements had been executed in the same thread, it would be safe to assume that the value printed out would be "1". But if the two statements are executed in separate threads, the value printed out might well be "0", because there's no guarantee that thread A's change to counter will be visible to thread B — unless the programmer has established a happens-before relationship between these two statements.
I answered a question a while ago about a bug in Java 5. Why doesn't volatile in java 5+ ensure visibility from another thread?
Given this piece of code:
public class Test {
    volatile static private int a;
    static private int b;

    public static void main(String[] args) throws Exception {
        for (int i = 0; i < 100; i++) {
            new Thread() {
                @Override
                public void run() {
                    int tt = b; // makes the jvm cache the value of b
                    while (a == 0) {
                    }
                    if (b == 0) {
                        System.out.println("error");
                    }
                }
            }.start();
        }
        b = 1;
        a = 1;
    }
}
The volatile store of a happens after the normal store of b. So when the thread runs and sees a != 0, because of the rules defined in the JMM, we must see b == 1.
The bug in the JRE allowed the thread to make it to the error line; it was subsequently fixed. This definitely can fail if a is not declared volatile.
This might reproduce the problem; at least on my computer, I can reproduce it after a few loops.
Suppose you have a Holder class:
class Holder {
    boolean flag = false;
    long modifyTime = Long.MAX_VALUE;
}
Let thread_A set flag to true and save the time into modifyTime.
Let another thread, say thread_B, read the Holder's flag. If thread_B still gets false even at a time later than modifyTime, then we can say we have reproduced the problem.
Example code
class Holder {
    boolean flag = false;
    long modifyTime = Long.MAX_VALUE;
}

public class App {

    public static void main(String[] args) {
        while (!test());
    }

    private static boolean test() {
        final Holder holder = new Holder();

        new Thread(new Runnable() {
            @Override
            public void run() {
                try {
                    Thread.sleep(10);
                    holder.flag = true;
                    holder.modifyTime = System.currentTimeMillis();
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }).start();

        long lastCheckStartTime = 0L;
        long lastCheckFailTime = 0L;

        while (true) {
            lastCheckStartTime = System.currentTimeMillis();
            if (holder.flag) {
                break;
            } else {
                lastCheckFailTime = System.currentTimeMillis();
                System.out.println(lastCheckFailTime);
            }
        }

        if (lastCheckFailTime > holder.modifyTime
                && lastCheckStartTime > holder.modifyTime) {
            System.out.println("last check fail time " + lastCheckFailTime);
            System.out.println("modify time " + holder.modifyTime);
            return true;
        } else {
            return false;
        }
    }
}
Result
last check fail time 1565285999497
modify time 1565285999494
This means thread_B got false from the Holder's flag field at time 1565285999497, even though thread_A had set it to true at time 1565285999494 (3 milliseconds earlier).
The example used is not well suited to demonstrating the memory consistency issue. Making it work requires brittle reasoning and complicated coding, and even then you may not see the result. Multithreading issues occur because of unlucky timing, so to increase the chances of observing the issue, we need to increase the chances of unlucky timing.
The following program achieves that.
public class ConsistencyIssue {

    static int counter = 0;

    public static void main(String[] args) throws InterruptedException {
        Thread thread1 = new Thread(new Increment(), "Thread-1");
        Thread thread2 = new Thread(new Increment(), "Thread-2");
        thread1.start();
        thread2.start();
        thread1.join();
        thread2.join();
        System.out.println(counter);
    }

    private static class Increment implements Runnable {
        @Override
        public void run() {
            for (int i = 1; i <= 10000; i++)
                counter++;
        }
    }
}
Execution 1 output: 10963,
Execution 2 output: 14552
The final count should have been 20000, but it is less than that. The reason is that counter++ is a multi-step operation:
1. read counter
2. increment it
3. store it
Two threads may both read, say, 1 at the same time, increment it to 2, and write out 2, whereas in a serial execution it would have gone 1 -> 2 -> 3.
We need a way to make all 3 steps atomic, i.e. executed by only one thread at a time.
Solution 1: Synchronized
Surround the increment with synchronized. Since counter is a static variable, you need to use class-level synchronization:
@Override
public void run() {
    for (int i = 1; i <= 10000; i++) {
        synchronized (ConsistencyIssue.class) {
            counter++;
        }
    }
}
Now it outputs: 20000
Solution 2: AtomicInteger
import java.util.concurrent.atomic.AtomicInteger;

public class ConsistencyIssue {

    static AtomicInteger counter = new AtomicInteger(0);

    public static void main(String[] args) throws InterruptedException {
        Thread thread1 = new Thread(new Increment(), "Thread-1");
        Thread thread2 = new Thread(new Increment(), "Thread-2");
        thread1.start();
        thread2.start();
        thread1.join();
        thread2.join();
        System.out.println(counter.get());
    }

    private static class Increment implements Runnable {
        @Override
        public void run() {
            for (int i = 1; i <= 10000; i++)
                counter.incrementAndGet();
        }
    }
}
We could also do this with semaphores or explicit locking, but for this simple code AtomicInteger is enough; a sketch of the explicit-lock variant is below.
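For completeness, here is a sketch of the explicit-lock variant using ReentrantLock (a Semaphore with a single permit would look much the same):

import java.util.concurrent.locks.ReentrantLock;

public class ConsistencyIssueWithLock {

    static int counter = 0;
    static final ReentrantLock lock = new ReentrantLock();

    private static class Increment implements Runnable {
        @Override
        public void run() {
            for (int i = 1; i <= 10000; i++) {
                lock.lock();           // only one thread may increment at a time
                try {
                    counter++;
                } finally {
                    lock.unlock();     // always release, even on exception
                }
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread thread1 = new Thread(new Increment(), "Thread-1");
        Thread thread2 = new Thread(new Increment(), "Thread-2");
        thread1.start();
        thread2.start();
        thread1.join();
        thread2.join();
        System.out.println(counter); // 20000
    }
}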
Sometimes when I try to reproduce some real concurrency problems, I use the debugger.
Make a breakpoint on the print and a breakpoint on the increment and run the whole thing.
Releasing the breakpoints in different sequences gives different results.
Maybe too simple, but it worked for me.
Please have another look at how the example is introduced in your source.
The key to avoiding memory consistency errors is understanding the happens-before relationship. This relationship is simply a guarantee that memory writes by one specific statement are visible to another specific statement. To see this, consider the following example.
This example illustrates the fact that multi-threading is not deterministic, in the sense that you get no guarantee about the order in which operations of different threads will be executed, which might result in different observations across several runs. But it does not illustrate a memory consistency error!
To understand what a memory consistency error is, you first need some insight into memory consistency. The simplest model of memory consistency was introduced by Lamport in 1979. Here is the original definition:
The result of any execution is the same as if the operations of all the processes were executed in some sequential order and the operations of each individual process appear in this sequence in the order specified by its program
Now, for a concrete example of a multi-threaded program, have a look at the figure in the more recent research paper about sequential consistency cited below. It illustrates what a real memory consistency error might look like.
To finally answer your question, please note the following points:
A memory consistency error always depends on the underlying memory model (a particular programming language may allow more behaviours for optimization purposes). Which memory model is best is still an open research question.
The example given above shows a violation of sequential consistency, but there is no guarantee that you can observe it with your favorite programming language, for two reasons: it depends on the programming language's exact memory model, and due to nondeterminism, you have no way to force a particular incorrect execution.
Memory models are a wide topic. To get more information, you can for example have a look at Torsten Hoefler and Markus Püschel's course at ETH Zürich, from which I understood most of these concepts.
Sources
Leslie Lamport. How to Make a Multiprocessor Computer That Correctly Executes Multiprocessor Programs, 1979
Wei-Yu Chen, Arvind Krishnamurthy, Katherine Yelick, Polynomial-Time Algorithms for Enforcing Sequential Consistency in SPMD Programs with Arrays, 2003
Design of Parallel and High-Performance Computing course, ETH Zürich
I'm learning about multithreaded counters and I'm wondering why, no matter how many times I run the code, it produces the right result.
public class MainClass {
    public static void main(String[] args) {
        Counter counter = new Counter();
        for (int i = 0; i < 3; i++) {
            CounterThread thread = new CounterThread(counter);
            thread.start();
        }
    }
}

public class CounterThread extends Thread {
    private Counter counter;

    public CounterThread(Counter counter) {
        this.counter = counter;
    }

    public void run() {
        for (int i = 0; i < 10; i++) {
            this.counter.add();
        }
        this.counter.print();
    }
}

public class Counter {
    private int count = 0;

    public void add() {
        this.count = this.count + 1;
    }

    public void print() {
        System.out.println(this.count);
    }
}
And this is the result
10
20
30
Not sure if this is just a fluke or if this is expected. I thought the result was going to be
10
10
10
Try increasing the loop count from 10 to 10000 and you'll likely see some differences in the output.
The most logical explanation is that with only 10 additions, each thread finishes so quickly that it is done before the next thread even starts, so each thread simply adds on top of the previous result.
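A minimal way to try that, keeping the classes from the question and only raising the iteration count (the exact number needed to see lost updates varies by machine):

public class CounterThread extends Thread {
    private final Counter counter;

    public CounterThread(Counter counter) {
        this.counter = counter;
    }

    public void run() {
        // With 10000 unsynchronized add() calls per thread, the threads are much more
        // likely to overlap, and the printed totals will usually fall short of 10000/20000/30000.
        for (int i = 0; i < 10000; i++) {
            this.counter.add();
        }
        this.counter.print();
    }
}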
I'm learning about multithreaded counters and I'm wondering why, no matter how many times I run the code, it produces the right result.
tl;dr: Check out @manouti's answer.
Even though you are sharing the same Counter object, which is unsynchronized, there are a couple of things that are causing your 3 threads to run (or look like they are running) serially with data synchronization. I had to work hard on my 8 proc Intel Linux box to get it to show any interleaving.
When threads start and when they finish, there are memory barriers that are crossed. According to the Java Memory Model, the guarantee is that the thread that does the thread.join() will see the results of the thread published to it but I suspect a central memory flush happens when the thread finishes. This means that if the threads run serially (and with such a small loop it's hard for them not to) they will act as if there is no concurrency because they will see each other's changes to the Counter.
Putting a Thread.sleep(100); at the front of the thread run() method causes it to not run serially. It also hopefully causes the threads to cache the Counter and not see the results published by other threads that have already finished. Still needed help though.
Starting the threads in a loop after they all have been instantiated helps concurrency.
Another thing that causes synchronization is:
System.out.println(this.count);
System.out is a PrintStream, which is a synchronized class. Every time a thread calls println(...) it is publishing its results to central memory. If you instead recorded the value and displayed it later, it might show better interleaving.
I really wonder if some Java compiler inlining of the Counter class at some point is causing part of the artificial synchronization. For example, I'm really surprised that a Thread.sleep(1000) at the front and end of the thread.run() method doesn't show 10,10,10.
It should be noted that on a non-intel architecture, with different memory and/or thread models, this might be easier to reproduce.
Oh, as commentary and apropos of nothing, typically it is recommended to implement Runnable instead of extending Thread.
So the following is my tweaks to your test program.
public class CounterThread extends Thread {
    private Counter counter;
    int result;   // recorded here, printed later by main (assumes Counter.count is accessible)
    ...

    public void run() {
        try {
            Thread.sleep(100);
        } catch (InterruptedException e1) {
            Thread.currentThread().interrupt(); // good pattern
            return;
        }
        for (int i = 0; i < 10; i++) {
            counter.add();
        }
        result = counter.count;
        // no print here
    }
}
Then your main could do something like:
Counter counter = new Counter();
List<CounterThread> counterThreads = new ArrayList<>();
for (int i = 0; i < 3; i++) {
    counterThreads.add(new CounterThread(counter));
}
// start in a loop after constructing them all, which improves the overlap chances
for (CounterThread counterThread : counterThreads) {
    counterThread.start();
}
// wait for them to finish
for (CounterThread counterThread : counterThreads) {
    counterThread.join();
}
// print the results
for (CounterThread counterThread : counterThreads) {
    System.out.println(counterThread.result);
}
Even with this, I never see 10,10,10 output on my box and I often see 10,20,30. Closest I get is 12,12,12.
Shows you how hard it is to properly test a threaded program. Believe me, if this code were in production and you were counting on the "free" synchronization, that is exactly when it would fail you. ;-)
I was thinking about how to solve a race condition between two threads that try to write to the same variable, using immutable objects and without the help of any keywords such as synchronized (locks) or volatile in Java.
But I couldn't figure it out. Is it possible to solve this problem that way at all?
public class Test {

    private static IAmSoImmutable iAmSoImmutable;

    private static final Runnable increment1000Times = () -> {
        for (int i = 0; i < 1000; i++) {
            iAmSoImmutable.increment();
        }
    };

    public static void main(String... args) throws Exception {
        for (int i = 0; i < 10; i++) {
            iAmSoImmutable = new IAmSoImmutable(0);

            Thread t1 = new Thread(increment1000Times);
            Thread t2 = new Thread(increment1000Times);
            t1.start();
            t2.start();
            t1.join();
            t2.join();

            // Prints a different result every time -- why? :
            System.out.println(iAmSoImmutable.value);
        }
    }

    public static class IAmSoImmutable {
        private int value;

        public IAmSoImmutable(int value) {
            this.value = value;
        }

        public IAmSoImmutable increment() {
            return new IAmSoImmutable(++value);
        }
    }
}
If you run this code you'll get different answers every time, which means a race condition is happening.
You cannot solve a race condition without using some existing synchronization (or volatile) technique. That is what they were designed for. If it were possible, there would be no need for them.
More specifically, your code seems to be broken. This method:
public IAmSoImmutable increment() {
    return new IAmSoImmutable(++value);
}
is wrong for two reasons:
1) It breaks the immutability of the class, because it changes the object's value field.
2) Its result, a new instance of IAmSoImmutable, is never used.
The fundamental problem here is that you've misunderstood what "immutability" means.
"Immutability" means — no writes. Values are created, but are never modified.
Immutability ensures that there are no race conditions, because race conditions are always caused by writes: either two threads performing writes that aren't consistent with each other, or one thread performing writes and another thread performing reads that give inconsistent results, or similar.
(Caveat: even an immutable object is effectively mutable during construction — Java creates the object, then populates its fields — so in addition to being immutable in general, you need to use the final keyword appropriately and take care with what you do in the constructor. But, those are minor details.)
With that understanding, we can go back to your initial sentence:
I was thinking about how to solve a race condition between two threads that try to write to the same variable, using immutable objects and without the help of any keywords such as synchronized (locks) or volatile in Java.
The problem here is that you actually aren't using immutable objects: your entire goal is to perform writes, and the entire concept of immutability is that no writes happen. These are not compatible.
That said, immutability certainly has its place. You can have immutable IAmSoImmutable objects, with the only writes being that you swap these objects out for each other. That helps simplify the problem, by reducing the scope of writes that you have to worry about: there's only one kind of write. But even that one kind of write will require synchronization.
The best approach here is probably to use an AtomicReference<IAmSoImmutable>. This provides a non-blocking way to swap out your IAmSoImmutable-s, while guaranteeing that no write gets silently dropped.
(In fact, in the special case that your value is just an integer, the JDK provides AtomicInteger that handles the necessary compare-and-swap loops and so on for threadsafe incrementation.)
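A sketch of that approach, using a hypothetical ImmutableCounter in place of IAmSoImmutable and a classic compare-and-swap retry loop:

import java.util.concurrent.atomic.AtomicReference;

public class AtomicSwapExample {

    // A genuinely immutable value object (hypothetical name, mirroring the question).
    static final class ImmutableCounter {
        final int value;
        ImmutableCounter(int value) { this.value = value; }
        ImmutableCounter increment() { return new ImmutableCounter(value + 1); }
    }

    private static final AtomicReference<ImmutableCounter> ref =
            new AtomicReference<>(new ImmutableCounter(0));

    static void increment() {
        // Compare-and-swap loop: retry until our new object replaces exactly the
        // snapshot we read, so no concurrent update is silently dropped.
        ImmutableCounter current;
        ImmutableCounter next;
        do {
            current = ref.get();
            next = current.increment();
        } while (!ref.compareAndSet(current, next));
    }

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 1000; i++) {
                increment();
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(ref.get().value); // always 2000
    }
}

On Java 8+, ref.updateAndGet(ImmutableCounter::increment) expresses the same retry loop more compactly.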
Even if the problems are resolved by:
- avoiding the mutation of IAmSoImmutable.value,
- reassigning the new object created by increment() back into the iAmSoImmutable reference,
there are still pieces of your code that are not atomic and that need some form of synchronization.
A solution would of course be to use a synchronized method:
public synchronized static void increment() {
    iAmSoImmutable = iAmSoImmutable.increment();
}

Thread t1 = new Thread(() -> {
    for (int i = 0; i < 1000; i++) {
        increment();
    }
});

Thread t2 = new Thread(() -> {
    for (int i = 0; i < 1000; i++) {
        increment();
    }
});
Question says it all really. Code can be found at the following link -> http://www.mindrot.org/projects/jBCrypt/
I strongly disagree with the existing answers, and the currently-accepted answer.
After a brief review of the source code for jBcrypt, I'm satisfied that it is thread-safe. The only instance created does not escape the scope of the key hashpw static method, and its fields are not shared with any other instances through any mechanism that I can see.
Further, I'm really confused by the complaints about the API consisting of static methods. Hashing functions are pure, and so there is no reason to not provide them via static methods. I'm actually quite glad that there is no usable userland access to instance methods, lest some fool try to do something clever (like using one instance over and over, "resetting" it to minimize allocation/GC or some such silliness).
Resurrecting this old issue because I believe the current answers are wrong and this thread pops high in google searches.
Looked at the code and ran a couple of simple tests. jBCrypt appears to be thread-safe.
In the code:
- although it has mostly static methods, those methods create instances of the BCrypt class to do the computations when needed.
Test samples:
- had BCrypt not been thread-safe, I would have expected either of these methods to throw some form of error, which they did not.
@Test
public void multiThreadHash() throws InterruptedException {
    List<Thread> threads = new ArrayList<Thread>();
    for (int i = 0; i < 40; i++) {
        threads.add(new Thread() {
            @Override
            public void run() {
                long start = System.currentTimeMillis();
                while (System.currentTimeMillis() < start + 12000) {
                    String password = "sample";
                    String hash = BCrypt.gensalt(4);
                    String hashed = BCrypt.hashpw(password, hash);
                }
            }
        });
    }
    for (Thread t : threads) {
        t.start();
    }
    Thread.sleep(12200L);
}
@Test
public void multiThreadReproductibleHash() throws InterruptedException {
    List<Thread> threads = new ArrayList<Thread>();
    for (int i = 0; i < 40; i++) {
        threads.add(new Thread() {
            @Override
            public void run() {
                long start = System.currentTimeMillis();
                while (System.currentTimeMillis() < start + 12000) {
                    String password = "sample";
                    String hash = "$2a$04$/YkrS2ifyAloNVUk5qAO7O";
                    String expected = "$2a$04$/YkrS2ifyAloNVUk5qAO7OlqIsp2ECTMDinOij9wvn7nXPRJCo8Gy";
                    String hashed = BCrypt.hashpw(password, hash);
                    Assert.assertEquals(expected, hashed);
                }
            }
        });
    }
    for (Thread t : threads) {
        t.start();
    }
    Thread.sleep(12200L);
}
First, it's not documented as thread-safe, so for all intents and purposes, it's not. And on further investigation, it's definitely not: it turns out that, while there are some instance fields, there is no instance of BCrypt exposed; it tries to do everything through static methods, so it may not be thread-safe. It's small enough that, assuming you care and the author would accept it, you could offer a patch to trivially convert it to provide separate, safe instances. (Edit to add: I have scrubbed it carefully and prepared a cleaner version, which I will send to the author...)
Second, in what manner do you want to use this in a multithreaded environment? It's not clear to me what you'd want to do in separate threads.
NOTE: There is a dissenting opinion below with more upvotes as of 7/18/2013