What does "thread-safe" really mean? [duplicate] - java

This question already has answers here:
What is the meaning of the term "thread-safe"?
(17 answers)
Closed 9 years ago.
From Java Concurrency In Practice:
package net.jcip.examples;

import java.util.concurrent.atomic.*;

/**
 * NumberRange
 * <p/>
 * Number range class that does not sufficiently protect its invariants
 *
 * @author Brian Goetz and Tim Peierls
 */
public class NumberRange {
    // INVARIANT: lower <= upper
    private final AtomicInteger lower = new AtomicInteger(0);
    private final AtomicInteger upper = new AtomicInteger(0);

    public void setLower(int i) {
        // Warning -- unsafe check-then-act
        if (i > upper.get())
            throw new IllegalArgumentException("can't set lower to " + i + " > upper");
        lower.set(i);
    }

    public void setUpper(int i) {
        // Warning -- unsafe check-then-act
        if (i < lower.get())
            throw new IllegalArgumentException("can't set upper to " + i + " < lower");
        upper.set(i);
    }

    public boolean isInRange(int i) {
        return (i >= lower.get() && i <= upper.get());
    }
}
It says “Both setLower and setUpper are check-then-act sequences, but they do not use sufficient locking to make them atomic. If the number range holds (0, 10), and one thread calls setLower(5) while another thread calls setUpper(4), with some unlucky timing both will pass the checks in the setters and both modifications will be applied. The result is that the range now holds (5, 4), an invalid state.”
How can this happen if AtomicIntegers are thread-safe? Did I miss some point? And how can it be fixed?

The involvement of AtomicInteger has nothing to do with the thread-safety problem in your question.
Here is the problem:
if (i > upper.get())  // step 1: the check
    throw new IllegalArgumentException("can't set lower to " + i + " > upper");
lower.set(i);         // step 2: the act
Steps 1 and 2 may be individually atomic, but together they form a two-step, non-atomic action.
Here is what can happen:
The if check passes.
Context switch to another thread.
The other thread calls upper.set(q) such that q < i.
Context switch back to this thread.
lower is set to i.
Each individual step is atomic in nature, but the collection of steps is not atomic.
A Java solution for this is:
synchronized (someObjectReference) // for example, this
{
    if (i > upper.get())
        throw new IllegalArgumentException("can't set lower to " + i + " > upper");
    lower.set(i);
}
Be sure to use the same object reference to synchronize all setting of the upper and lower values.
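A minimal sketch of that fix applied to the class from the question: guard both check-then-act sequences (and the read in isInRange) with the same intrinsic lock, here the instance itself. With the lock in place the AtomicIntegers are no longer needed, so plain ints are shown for clarity (the class name is illustrative).
public class SynchronizedNumberRange {
    // INVARIANT: lower <= upper, now guarded by the object's intrinsic lock
    private int lower = 0;
    private int upper = 0;

    public synchronized void setLower(int i) {
        if (i > upper)
            throw new IllegalArgumentException("can't set lower to " + i + " > upper");
        lower = i;
    }

    public synchronized void setUpper(int i) {
        if (i < lower)
            throw new IllegalArgumentException("can't set upper to " + i + " < lower");
        upper = i;
    }

    public synchronized boolean isInRange(int i) {
        return i >= lower && i <= upper;
    }
}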

Let's create an object:
NumberRange nr = new NumberRange();
Thread A:
nr.setLower(-1); //A1
Thread B:
nr.setLower(-3); //B1
nr.setUpper(-2); //B2
Execution order: B1, then A1 and B2 at the same time. If thread B passes its check in setUpper before A runs (the check passes because lower is still -3 and -3 < -2), and then A passes its check in setLower before B writes the new upper (the check passes because upper is still 0 and -1 < 0), this code ends up with the invalid range (-1, -2) without throwing any error, because your methods are not atomic. The check is atomic, and the set method too, but together they are two atomic steps, not one.
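For completeness, a sketch in the spirit of the book's own CasNumberRange example (not taken from the answers above): keep both bounds in one immutable object behind a single AtomicReference, so the check and the update always act on one consistent snapshot, retrying if another thread changed the pair in between.
import java.util.concurrent.atomic.AtomicReference;

public class CasNumberRange {
    private static class IntPair {
        final int lower;  // invariant: lower <= upper
        final int upper;
        IntPair(int lower, int upper) {
            this.lower = lower;
            this.upper = upper;
        }
    }

    private final AtomicReference<IntPair> values =
            new AtomicReference<>(new IntPair(0, 0));

    public void setLower(int i) {
        while (true) {
            IntPair oldv = values.get();
            if (i > oldv.upper)
                throw new IllegalArgumentException("can't set lower to " + i + " > upper");
            IntPair newv = new IntPair(i, oldv.upper);
            if (values.compareAndSet(oldv, newv))
                return;  // retry if another thread replaced the pair meanwhile
        }
    }

    public void setUpper(int i) {
        while (true) {
            IntPair oldv = values.get();
            if (i < oldv.lower)
                throw new IllegalArgumentException("can't set upper to " + i + " < lower");
            IntPair newv = new IntPair(oldv.lower, i);
            if (values.compareAndSet(oldv, newv))
                return;
        }
    }

    public boolean isInRange(int i) {
        IntPair v = values.get();  // one consistent snapshot of both bounds
        return i >= v.lower && i <= v.upper;
    }
}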

Related

How is codahale metrics Meter mark() method threadsafe?

I have recently begun to learn the CodaHale/DropWizard metrics library. I cannot understand how the Meter class is thread-safe (it is, according to the documentation), especially the mark() and tickIfNecessary() methods here:
https://github.com/dropwizard/metrics/blob/3.2-development/metrics-core/src/main/java/com/codahale/metrics/Meter.java#L54-L77
public void mark(long n) {
    tickIfNecessary();
    count.add(n);
    m1Rate.update(n);
    m5Rate.update(n);
    m15Rate.update(n);
}

private void tickIfNecessary() {
    final long oldTick = lastTick.get();
    final long newTick = clock.getTick();
    final long age = newTick - oldTick;
    if (age > TICK_INTERVAL) {
        final long newIntervalStartTick = newTick - age % TICK_INTERVAL;
        if (lastTick.compareAndSet(oldTick, newIntervalStartTick)) {
            final long requiredTicks = age / TICK_INTERVAL;
            for (long i = 0; i < requiredTicks; i++) {
                m1Rate.tick();
                m5Rate.tick();
                m15Rate.tick();
            }
        }
    }
}
I can see that there is a lastTick of type AtomicLong, but there can still be a situation where the m1-m15 rates are ticking a little bit longer, so another thread could invoke those ticks as well, as part of the next TICK_INTERVAL. Wouldn't that be a race condition, since the tick() method of the rates is not synchronized at all?
https://github.com/dropwizard/metrics/blob/3.2-development/metrics-core/src/main/java/com/codahale/metrics/EWMA.java#L86-L95
public void tick() {
    final long count = uncounted.sumThenReset();
    final double instantRate = count / interval;
    if (initialized) {
        rate += (alpha * (instantRate - rate));
    } else {
        rate = instantRate;
        initialized = true;
    }
}
Thanks,
Marian
It is thread safe because this line from tickIfNecessary() returns true only once per newIntervalStartTick
if (lastTick.compareAndSet(oldTick, newIntervalStartTick))
What happens if two threads enter tickIfNecessary() at almost the same time?
Both threads read the same value from oldTick, decide that at least TICK_INTERVAL nanoseconds have passed and calculate a newIntervalStartTick.
Now both threads try to do lastTick.compareAndSet(oldTick, newIntervalStartTick). As the name compareAndSet implies, this method compares the current value of lastTick to oldTick, and only if the value is equal to oldTick is it atomically replaced with newIntervalStartTick, returning true.
Since this is an atomic instruction (at the hardware level!), only one thread can succeed. When the other thread executes this method it will already see newIntervalStartTick as the current value of lastTick. Since this value no longer matches oldTick the update fails and the method returns false and therefore this thread does not call m1Rate.tick() to m15Rate.tick().
The EWMA.update(n) method uses a java.util.concurrent.atomic.LongAdder to accumulate the event counts, which gives similar thread-safety guarantees.
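A minimal stand-alone sketch of the gating pattern described above (the names and the interval are illustrative, not the Metrics library's own code): many threads may notice that the interval has elapsed, but only the thread whose compareAndSet succeeds performs the per-interval work.
import java.util.concurrent.atomic.AtomicLong;

public class IntervalGate {
    private static final long INTERVAL_NANOS = 5_000_000_000L; // assumed 5 second interval
    private final AtomicLong lastTick = new AtomicLong(System.nanoTime());

    public void maybeDoIntervalWork() {
        long oldTick = lastTick.get();
        long now = System.nanoTime();
        long age = now - oldTick;
        if (age > INTERVAL_NANOS) {
            long newIntervalStart = now - age % INTERVAL_NANOS;
            // Only one of the racing threads can win this CAS...
            if (lastTick.compareAndSet(oldTick, newIntervalStart)) {
                doIntervalWork(); // ...so the work runs once per interval
            }
        }
    }

    private void doIntervalWork() {
        System.out.println(Thread.currentThread().getName() + " ticked");
    }
}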
As far as I can see, you are right. If tickIfNecessary() is called such that age > TICK_INTERVAL while another call is still running, it is possible that m1Rate.tick() and the other tick() methods are called at the same time from multiple threads. So it boils down to whether tick() and the routines/operations it calls are safe.
Let's dissect tick():
public void tick() {
    final long count = uncounted.sumThenReset();
    final double instantRate = count / interval;
    if (initialized) {
        rate += (alpha * (instantRate - rate));
    } else {
        rate = instantRate;
        initialized = true;
    }
}
alpha and interval are set only on instance initialization and are marked final, so they are thread-safe since they are read-only. count and instantRate are locals and thus not visible to other threads anyway. rate and initialized are marked volatile, so their writes should always be visible to following reads.
If I'm not wrong, pretty much everything from the first read of initialized to the last write to either initialized or rate is open to races, but some of them are without effect, like two threads racing to switch initialized to true.
It seems the majority of effective races can happen in rate += (alpha * (instantRate - rate)); especially dropped or mixed calculations like:
Assumed: initialized is true
Thread1: calculates count, instantRate, checks initialized, does the first read of rate which we call previous_rate and for whatever reason stalls
Thread2: calculates count, instantRate, checks initialized, and calculates rate += (alpha * (instantRate - rate));
Thread1: continues its operation and calculates rate += (alpha * (instantRate - previous_rate));
A drop would occur if the reads and writes somehow get ordered such that rate is read on all threads and then written on all threads, effectively dropping one or more calculations.
But the probability of such races, meaning that age > TICK_INTERVAL holds for two threads at once so that both run into the same tick() call, and especially into the same rate += (alpha * (instantRate - rate)), may be extremely low and, depending on the values, not noticeable.
The mark() method seems to be thread-safe as long as the LongAdderProxy uses a thread-safe data structure for update/add and, for the tick() method, in sumThenReset.
I think the only ones who can answer the questions left open - whether the races are without noticeable effect or otherwise mitigated - are the project authors or people who have in-depth knowledge of these parts of the project and the values calculated.
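To make the core concern concrete, a minimal stand-alone illustration (not from the Metrics library) of why a compound update to a volatile field is still racy: volatile guarantees visibility of each individual read and write, but the read, the arithmetic, and the write remain separate steps, so interleaved threads can lose updates.
public class VolatileRmwRace {
    private static volatile double rate = 0.0;

    public static void main(String[] args) throws InterruptedException {
        Runnable adder = () -> {
            for (int i = 0; i < 100_000; i++) {
                rate += 1.0; // read, add, write: three steps, not one
            }
        };
        Thread t1 = new Thread(adder);
        Thread t2 = new Thread(adder);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // Usually prints less than 200000.0 because some updates are lost.
        System.out.println(rate);
    }
}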

Unexpected result in multithreaded program

This simple program has a shared array and 2 threads:
first thread - shows sum of values in the array.
second thread - subtracts 200 from one cell of the array and adds 200 to another cell.
I would expect to see the results: 1500 (sum of the array), 1300 (if the display occurs between the subtraction and the addition).
But for some reason, sometimes 1100 and 1700 appear, which I can't explain...
import java.util.Random;

public class MainClass {
    public static void main(String[] args) {
        Bank bank = new Bank();
        bank.CurrentSum.start();
        bank.TransferMoney.start();
    }
}

class Bank {
    private int[] Accounts = { 100, 200, 300, 400, 500 };
    private Random rnd = new Random();

    Thread CurrentSum = new Thread("Show sum") {
        public void run() {
            for (int i = 0; i < 500; i++) {
                System.out.println(Accounts[0] + Accounts[1] + Accounts[2]
                        + Accounts[3] + Accounts[4]);
            }
        }
    };

    Thread TransferMoney = new Thread("Tranfer") {
        public void run() {
            for (int i = 0; i < 50000; i++) {
                Accounts[rnd.nextInt(5)] -= 200;
                Accounts[rnd.nextInt(5)] += 200;
            }
        }
    };
}
You are not updating the values in an atomic or thread-safe manner. This means that sometimes you see two more -200s than +200s, and sometimes two more +200s than -200s. As you iterate over the values, it is possible to see a +200 value while the matching -200 was applied to an element you have already read, so you miss it; then you see another +200 update, again missing its -200 change.
It should be possible to see up to 5 x +200 or 5 x -200 in rare cases.
It's happening because the addition of the five values is not atomic, and may be interrupted by the decrement and increment happening in the other thread.
Here's a possible case.
The display thread adds Accounts[0]+Accounts[1]+Accounts[2].
The updating thread decrements Accounts[0] and increments Accounts[3].
The updating thread decrements Accounts[1] and increments Accounts[4].
The display thread continues with its addition, adding Accounts[3] and Accounts[4] to the sum that it had already partially evaluated.
In this case, the sum will be 1900, because you've included two values after they've been incremented.
You should be able to work out cases like this, to give you sums of anything between 700 and 2300.
Perhaps on purpose, you are not doing the addition operation atomically.
That means that this line:
System.out.println(Accounts[0] + Accounts[1] + Accounts[2]
+ Accounts[3] + Accounts[4]);
Will run in multiple steps, any of which can occur during any iteration of the second thread.
1. Get value of Accounts[0] = a
2. Get value of Accounts[1] = b
... and so on for the remaining elements.
The addition then happens after all the values are pulled from the array.
You can imagine that 200 is subtracted from Accounts[0], which is dereferenced by the JRE, then in another loop iteration of the second thread, 200 is removed from Accounts[1], which is subsequently dereferenced by the JRE. This can result in the output you see.
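An illustrative sketch of that decomposition (not the actual compiled form): the summing line amounts to five separate array reads followed by the addition, and the transfer thread can update the array between any two of those reads.
// Roughly what the summing line amounts to, written out as separate loads.
static int unsafeTotal(int[] accounts) {
    int a0 = accounts[0];
    int a1 = accounts[1];
    int a2 = accounts[2];
    // <-- a transfer (-200 on an element already read, +200 on one still to be
    //     read, or vice versa) may complete right here
    int a3 = accounts[3];
    int a4 = accounts[4];
    return a0 + a1 + a2 + a3 + a4;
}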
The Accounts variable is being accessed from more than one thread, one of which modifies its value. In order for the other thread to reliably read the modified values at all it is necessary to use a "memory barrier". Java has a number of ways of providing a memory barrier: synchronized, volatile or one of the Atomic types are the most common.
The Bank class also has some logic which requires the modifications to be made in multiple steps before the Accounts variable is back in a consistent state. The synchronized keyword can also be used to prevent another block of code that is synchronised on the same object from running until the first synchronized block has completed.
This implementation of the Bank class locks all access to the Accounts variable using the mutex lock object of the Bank object that owns the Accounts variable. This ensures that each synchronised block is run in its entirety before the other thread can run its own synchronised block. It also ensures that changes to the Accounts variable are visible to the other thread:
class Bank {
    private int[] Accounts = { 100, 200, 300, 400, 500 };
    private Random rnd = new Random();

    Thread CurrentSum = new Thread("Show sum") {
        public void run() {
            for (int i = 0; i < 500; i++) {
                printAccountsTotal();
            }
        }
    };

    Thread TransferMoney = new Thread("Tranfer") {
        public void run() {
            for (int i = 0; i < 50000; i++) {
                updateAccounts();
            }
        }
    };

    synchronized void printAccountsTotal() {
        System.out.println(Accounts[0] + Accounts[1] + Accounts[2]
                + Accounts[3] + Accounts[4]);
    }

    synchronized void updateAccounts() {
        Accounts[rnd.nextInt(5)] -= 200;
        Accounts[rnd.nextInt(5)] += 200;
    }
}

Object Sharing in Simple Multi-threaded Program

Introduction
I have written a very simple program as an attempt to re-introduce myself to multi-threaded programming in Java. The objective of my program is derived from this rather neat set of articles, written by Jakob Jankov. For the program's original, unmodified version, consult the bottom of the linked article.
Jankov's program does not System.out.println the variables, so you cannot see what is happening. If you print the resulting value you get the same result every time (the program is thread-safe); however, if you print some of the inner workings, the "inner behaviour" is different each time.
I understand the issues involved in thread scheduling and the unpredictability of when a thread runs. I believe that may be a factor in the question I ask below.
Program's Three Parts
The Main Class:
public class multiThreadTester {
    public static void main(String[] args) {
        // Counter object to be shared between two threads:
        Counter counter = new Counter();

        // Instantiation of Threads:
        Thread counterThread1 = new Thread(new CounterThread(counter), "counterThread1");
        Thread counterThread2 = new Thread(new CounterThread(counter), "counterThread2");

        counterThread1.start();
        counterThread2.start();
    }
}
The objective of the above class is simply to share an object. In this case, the threads share an object of type Counter:
Counter Class
public class Counter {
    long count = 0;

    // Adding a value to count data member:
    public synchronized void add(long value) {
        this.count += value;
    }

    public synchronized long getValue() {
        return count;
    }
}
The above is simply the definition of the Counter class, which includes only a primitive member of type long.
CounterThread Class
Below is the CounterThread class, virtually unmodified from the code provided by Jankov. The only real difference (besides my implementing Runnable as opposed to extending Thread) is the addition of System.out.println(). I added this to watch the inner workings of the program.
public class CounterThread implements Runnable {
    protected Counter counter = null;

    public CounterThread(Counter aCounter) {
        this.counter = aCounter;
    }

    public void run() {
        for (int i = 0; i < 10; i++) {
            System.out.println("BEFORE add - " + Thread.currentThread().getName() + ": " + this.counter.getValue());
            counter.add(i);
            System.out.println("AFTER add - " + Thread.currentThread().getName() + ": " + this.counter.getValue());
        }
    }
}
Question
As you can see, the code is very simple. The above code's only purpose is to watch what happens as two threads share a thread-safe object.
My question comes as a result of the output of the program (which I have tried to condense, below). The output is hard to "get consistent" to demonstrate my question, as the spread of the difference (see below) can be quite great:
Here's the condensed output (trying to minimize what you look at):
AFTER add - counterThread1: 0
BEFORE add - counterThread1: 0
AFTER add - counterThread1: 1
BEFORE add - counterThread1: 1
AFTER add - counterThread1: 3
BEFORE add - counterThread1: 3
AFTER add - counterThread1: 6
BEFORE add - counterThread1: 6
AFTER add - counterThread1: 10
BEFORE add - counterThread2: 0 // This BEFORE add statement is the source of my question
And one more output that better demonstrates:
BEFORE add - counterThread1: 0
AFTER add - counterThread1: 0
BEFORE add - counterThread1: 0
AFTER add - counterThread1: 1
BEFORE add - counterThread2: 0
AFTER add - counterThread2: 1
BEFORE add - counterThread2: 1
AFTER add - counterThread2: 2
BEFORE add - counterThread2: 2
AFTER add - counterThread2: 4
BEFORE add - counterThread2: 4
AFTER add - counterThread2: 7
BEFORE add - counterThread2: 7
AFTER add - counterThread2: 11
BEFORE add - counterThread1: 1 // Here, counterThread1 still believes the value of Counter's counter is 1
AFTER add - counterThread1: 13
BEFORE add - counterThread1: 13
AFTER add - counterThread1: 16
BEFORE add - counterThread1: 16
AFTER add - counterThread1: 20
My question(s):
Thread safety ensures the safe mutation of a variable, i.e. only one thread can access the object at a time. Doing this ensures that the "read" and "write" methods behave appropriately, only writing after a thread has released its lock (eliminating racing).
Why, despite the correct write behaviour, does counterThread2 "believe" Counter's value (not the iterator i) to still be zero? What is happening in memory? Is this a matter of the thread containing its own, local Counter object?
Or, more simply, after counterThread1 has updated the value, why does counterThread2 not see - in this case, System.out.println() - the correct value? Despite not seeing the value, the correct value is written to the object.
Why, despite the correct write behaviour, does counterThread2 "believe" Counter's value to still be zero?
The threads interleaved in such a way as to cause this behaviour. Because the print statements are outside of the synchronised block, it is possible for a thread to read the counter value and then pause, due to scheduling, while the other thread increments multiple times. When the waiting thread finally resumes and enters the counter's add method, the value of the counter will have moved on quite a bit and will no longer match what was printed in the BEFORE log line.
As an example, I have modified your code to make it more evident that both threads are working on the same counter. First I have moved the print statements into the counter, then I added a unique thread label so that we can tell which thread was responsible for the increment and finally I only increment by one so that any jumps in the counter value will stand out more clearly.
public class Main {
    public static void main(String[] args) {
        // Counter object to be shared between two threads:
        Counter counter = new Counter();

        // Instantiation of Threads:
        Thread counterThread1 = new Thread(new CounterThread("A", counter), "counterThread1");
        Thread counterThread2 = new Thread(new CounterThread("B", counter), "counterThread2");

        counterThread1.start();
        counterThread2.start();
    }
}

class Counter {
    long count = 0;

    // Adding a value to count data member:
    public synchronized void add(String label, long value) {
        System.out.println(label + " BEFORE add - " + Thread.currentThread().getName() + ": " + this.count);
        this.count += value;
        System.out.println(label + " AFTER add - " + Thread.currentThread().getName() + ": " + this.count);
    }

    public synchronized long getValue() {
        return count;
    }
}

class CounterThread implements Runnable {
    private String label;
    protected Counter counter = null;

    public CounterThread(String label, Counter aCounter) {
        this.label = label;
        this.counter = aCounter;
    }

    public void run() {
        for (int i = 0; i < 10; i++) {
            counter.add(label, 1);
        }
    }
}

Why is i++ not atomic?

Why is i++ not atomic in Java?
To get a bit deeper into Java, I tried to count how often the loops in the threads are executed.
So I used a
private static int total = 0;
in the main class.
I have two threads.
Thread 1: Prints System.out.println("Hello from Thread 1!");
Thread 2: Prints System.out.println("Hello from Thread 2!");
And I count the lines printed by thread 1 and thread 2. But the lines of thread 1 + lines of thread 2 don't match the total number of lines printed out.
Here is my code:
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.logging.Level;
import java.util.logging.Logger;

public class Test {
    private static int total = 0;
    private static int countT1 = 0;
    private static int countT2 = 0;
    private boolean run = true;

    public Test() {
        ExecutorService newCachedThreadPool = Executors.newCachedThreadPool();
        newCachedThreadPool.execute(t1);
        newCachedThreadPool.execute(t2);
        try {
            Thread.sleep(1000);
        }
        catch (InterruptedException ex) {
            Logger.getLogger(Test.class.getName()).log(Level.SEVERE, null, ex);
        }
        run = false;
        try {
            Thread.sleep(1000);
        }
        catch (InterruptedException ex) {
            Logger.getLogger(Test.class.getName()).log(Level.SEVERE, null, ex);
        }
        System.out.println((countT1 + countT2 + " == " + total));
    }

    private Runnable t1 = new Runnable() {
        @Override
        public void run() {
            while (run) {
                total++;
                countT1++;
                System.out.println("Hello #" + countT1 + " from Thread 1! Total hello: " + total);
            }
        }
    };

    private Runnable t2 = new Runnable() {
        @Override
        public void run() {
            while (run) {
                total++;
                countT2++;
                System.out.println("Hello #" + countT2 + " from Thread 2! Total hello: " + total);
            }
        }
    };

    public static void main(String[] args) {
        new Test();
    }
}
i++ is probably not atomic in Java because atomicity is a special requirement which is not present in the majority of the uses of i++. That requirement has a significant overhead: there is a large cost in making an increment operation atomic; it involves synchronization at both the software and hardware levels that need not be present in an ordinary increment.
You could make the argument that i++ should have been designed and documented as specifically performing an atomic increment, so that a non-atomic increment is performed using i = i + 1. However, this would break the "cultural compatibility" between Java, and C and C++. As well, it would take away a convenient notation which programmers familiar with C-like languages take for granted, giving it a special meaning that applies only in limited circumstances.
Basic C or C++ code like for (i = 0; i < LIMIT; i++) would translate into Java as for (i = 0; i < LIMIT; i = i + 1); because it would be inappropriate to use the atomic i++. What's worse, programmers coming from C or other C-like languages to Java would use i++ anyway, resulting in unnecessary use of atomic instructions.
Even at the machine instruction set level, an increment type operation is usually not atomic, for performance reasons. In x86, a special "lock" instruction prefix must be used to make the inc instruction atomic, for the same reasons as above. If inc were always atomic, it would never be used when a non-atomic inc is required; programmers and compilers would generate code that loads, adds 1 and stores, because it would be way faster.
In some instruction set architectures, there is no atomic inc or perhaps no inc at all; to do an atomic inc on MIPS, you have to write a software loop which uses the ll and sc: load-linked, and store-conditional. Load-linked reads the word, and store-conditional stores the new value if the word has not changed, or else it fails (which is detected and causes a re-try).
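For comparison, the same retry-loop idea expressed at the Java level: a sketch that hand-rolls what AtomicInteger.incrementAndGet() already does, built on compareAndSet, which the JVM maps to a locked instruction or an ll/sc-style loop depending on the hardware.
import java.util.concurrent.atomic.AtomicInteger;

public class CasIncrement {
    private final AtomicInteger value = new AtomicInteger();

    public int incrementAndGet() {
        while (true) {
            int current = value.get();                 // load
            int next = current + 1;                    // compute
            if (value.compareAndSet(current, next)) {  // store only if unchanged
                return next;
            }
            // otherwise another thread won the race; retry with the fresh value
        }
    }
}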
i++ involves two operations:
read the current value of i
increment the value and assign it to i
When two threads perform i++ on the same variable at the same time, they may both get the same current value of i, and then increment and set it to i+1, so you'll get a single incrementation instead of two.
Example :
int i = 5;
Thread 1 : i++;
// reads value 5
Thread 2 : i++;
// reads value 5
Thread 1 : // increments i to 6
Thread 2 : // increments i to 6
// i == 6 instead of 7
Java specification
The important thing is the JLS (Java Language Specification) rather than how various implementations of the JVM may or may not have implemented a certain feature of the language.
The JLS defines the ++ postfix operator in clause 15.14.2, which says, among other things, "the value 1 is added to the value of the variable and the sum is stored back into the variable". Nowhere does it mention or hint at multithreading or atomicity.
For multithreading or atomicity, the JLS provides volatile and synchronized. Additionally, there are the Atomic… classes.
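A minimal sketch of the synchronized route this answer points to (volatile alone would not make i++ atomic, since the read-modify-write still happens in several steps): all increments and reads go through methods that lock the same monitor.
public class SynchronizedCounter {
    private int i = 0;

    public synchronized void increment() {
        i++; // safe here: only one thread can be inside this method at a time
    }

    public synchronized int get() {
        return i;
    }
}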
Why is i++ not atomic in Java?
Let's break the increment operation into multiple statements:
Thread 1 & 2 :
Fetch value of total from memory
Add 1 to the value
Write back to the memory
If there is no synchronization, then let's say thread one has read the value 3 and incremented it to 4, but has not written it back. At this point, a context switch happens. Thread two reads the value 3, increments it, and another context switch happens. Though both threads have performed an increment, the total value will still be 4 - a race condition.
i++ is a statement which simply involves 3 operations:
Read the current value
Compute the new value
Store the new value
These three operations are not guaranteed to execute in a single step; in other words, i++ is a compound operation, not an atomic one. As a result, all sorts of things can go wrong when more than one thread is involved in such a compound, non-atomic operation.
Consider the following scenario:
Time 1:
Thread A fetches i
Thread B fetches i
Time 2:
Thread A computes a new value for i, say -foo-
Thread B computes a new value for i, say -bar-
Thread B stores -bar- in i
// At this time thread B seems to be more 'active'. Not only does it overwrite
// its local copy of i but also makes it in time to store -bar- back to
// 'main' memory (i)
Time 3:
Thread A attempts to store -foo- in memory effectively overwriting the -bar-
value (in i) which was just stored by thread B in Time 2.
Thread B has nothing to do here. Its work was done by Time 2. However it was
all for nothing as -bar- was eventually overwritten by another thread.
And there you have it. A race condition.
That's why i++ is not atomic. If it was, none of this would have happened and each fetch-update-store would happen atomically. That's exactly what AtomicInteger is for and in your case it would probably fit right in.
P.S.
An excellent book covering all of those issues and then some is this:
Java Concurrency in Practice
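As a hedged sketch of the AtomicInteger suggestion applied to the program from the question (the volatile on run is an extra assumption, added so both workers reliably see the stop flag): with atomic counters, the final line prints matching numbers.
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicTest {
    private static final AtomicInteger total = new AtomicInteger();
    private static final AtomicInteger countT1 = new AtomicInteger();
    private static final AtomicInteger countT2 = new AtomicInteger();
    private static volatile boolean run = true;

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> {
            while (run) {
                total.incrementAndGet();   // atomic read-modify-write
                countT1.incrementAndGet();
            }
        });
        Thread t2 = new Thread(() -> {
            while (run) {
                total.incrementAndGet();
                countT2.incrementAndGet();
            }
        });
        t1.start();
        t2.start();
        Thread.sleep(1000);
        run = false;
        t1.join();
        t2.join();
        // With atomic increments, the two partial counts always add up to total.
        System.out.println(countT1.get() + countT2.get() + " == " + total.get());
    }
}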
In the JVM, an increment involves a read and a write, so it's not atomic.
If the operation i++ were atomic, you wouldn't have the chance to read the value from it. Reading the value first is exactly what you want to do when using i++ (instead of ++i).
For example look at the following code:
public static void main(final String[] args) {
    int i = 0;
    System.out.println(i++);
}
In this case we expect the output to be: 0
(because we post-increment, i.e. first read, then update)
This is one of the reasons the operation can't be atomic, because you need to read the value (and do something with it) and then update the value.
The other important reason is that doing something atomically usually takes more time because of locking. It would be silly to have all the operations on primitives take a little bit longer for the rare cases when people want to have atomic operations. That is why they've added AtomicInteger and other atomic classes to the language.
There are two steps:
fetch i from memory
store i+1 back into i
so it's not an atomic operation.
When thread1 executes i++, and thread2 executes i++, the final value of i may be only i+1 instead of i+2.
In the JVM (or any VM), i++ is equivalent to the following:
int temp = i; // 1. read
i = temp + 1; // 2. increment the value then 3. write it back
that is why i++ is non-atomic.
Concurrency (the Thread class and such) was an added feature in v1.0 of Java. i++ was added in the beta before that, and as such it is still more than likely in its (more or less) original implementation.
It is up to the programmer to synchronize variables. Check out Oracle's tutorial on this.
Edit: To clarify, i++ is a well-defined procedure that predates Java, and as such the designers of Java decided to keep the original functionality of that procedure.
The ++ operator was defined in B (1969), which predates Java and threading by just a tad.

Java atomic classes in compound operations

Will the following code cause a race condition issue if several threads invoke the getCurrentCount method?
import java.util.concurrent.atomic.AtomicInteger;

public class sample {
    private AtomicInteger counter = new AtomicInteger(0);

    public int getCurrentCount() {
        int current = counter.getAndIncrement();
        if (counter.compareAndSet(8, 0)) current = 0;
        return current;
    }
}
If it causes race condition, what are the possible solution other than using synchronized keyword?
You probably don't want to let the counter exceed 8 and this won't work. There are race conditions.
It looks like you want a mod 8 counter. The easiest way is to leave the AtomicInteger alone and use something like
int current = counter.getAndIncrement() & 7;
(which is a fixed and optimized version of % 8). For computations mod 8, or any other power of two, it works perfectly; for other numbers you'd need % N and would get problems with the int overflowing to negative numbers.
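A small illustration of that overflow caveat (Math.floorMod is one way around it for moduli that are not powers of two; the example values are just for demonstration):
public class ModOverflowDemo {
    public static void main(String[] args) {
        int overflowed = Integer.MAX_VALUE + 1;            // wraps to Integer.MIN_VALUE
        System.out.println(overflowed % 10);               // -8: % can go negative after overflow
        System.out.println(Math.floorMod(overflowed, 10)); // 2: floorMod stays in [0, 10)
        System.out.println(overflowed & 7);                // 0: the power-of-two mask is always non-negative
    }
}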
The direct solution goes as follows
public int getCurrentCount() {
    while (true) {
        int current = counter.get();
        int next = (current + 1) % 8;
        if (counter.compareAndSet(current, next)) return next;
    }
}
This is about how getAndIncrement() itself works, just slightly modified.
Yes, it probably does not do what you want (there is a kind of race condition).
One thread may call getAndIncrement() and receive a 8
A second thread may call getAndIncrement() and receive a 9
The first thread tries compareAndSet but the value is not 8
The second thread tries compareAndSet but the value is not 8
If there's no risk of overflowing, you could do something like
return counter.getAndIncrement() % 8;
Relying on something not overflowing seems like a poor idea to me, though, and I would probably do roughly what you do, but make the method synchronized.
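One reading of that suggestion, as a sketch (not the answerer's exact code): drop the AtomicInteger and guard a plain int with synchronized, wrapping explicitly so the value never leaves the 0..7 range.
public class ModEightCounter {
    private int count = 0;

    public synchronized int getCurrentCount() {
        int current = count;
        count = (count + 1) % 8; // wrap before anyone else can see a value >= 8
        return current;
    }
}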
Related question: Modular increment with Java's Atomic classes
What are you trying to achieve? Even if you use the fixes proposed by ajoobe or maartinus you can end up with different threads getting the same answer - consider 20 threads running simultaneously. I don't see any interesting significance of this "counter" as you present it here - you may as well just pick a random number between 0 and 8.
Based on the code for getAndIncrement()
public int getCurrentCount() {
    for (;;) {
        int current = counter.get();
        int next = current + 1;
        if (next >= 8) next = 0;
        if (counter.compareAndSet(current, next))
            return current;
    }
}
However a simpler implementation in your case is to do
public int getCurrentCount() {
    return counter.getAndIncrement() & 0x7;
}
I assume that what you want is to have a counter from 0 to 7.
If that is the case then a race condition can possibly happen and the value of the counter can become 9.
Unless you are OK with the % solution suggested by others, you might have to use synchronized.
