Semaphores in Java

Has anyone got any idea how to implement a rudimentary semaphore in Java without making use of wait(), notify() or synchronized? I am not looking for a solution to this problem, just a pointer in the right direction, because I am totally lost on this.

java.util.concurrent.Semaphore

I had similar homework a few years ago at my university, but in C++. Java is too high-level a language for this kind of stuff.
Here is my implementation of signal and wait in C++, but I don't know if it is going to be helpful, because you will have to implement a lot of other things.
int KernelSem::wait() {
    lock();
    if (--value < 0) {
        PCB::running->state = PCB::BLOCKED;
        PCB::running->waitingAtSem = this;
        blockedQueue->put(PCB::running);
        dispatch();
    }
    else {
        PCB::running->deblockedBy = 0;
        if (semPreempt) dispatch();
    }
    unlock();
    return PCB::running->deblockedBy;
}

void KernelSem::signal() {
    lock();
    if (value++ < 0) {
        PCB* tempPCB = blockedQueue->get();
        if (tempPCB) {
            tempPCB->state = PCB::READY;
            tempPCB->deblockedBy = 0;
            tempPCB->waitingAtSem = 0;
            Scheduler::put(tempPCB);
        }
    }
    if (semPreempt) dispatch();
    unlock();
}
The lock and unlock functions are just asm{cli} and asm{sti} (clear/set the interrupt flag).
PCB is a process control block.
Hope it helps

In a very simple (again, simple) way you could implement this using a simple int or boolean.
Test the int or boolean before granting access: if it is 0, add 1 and continue. If not, call Thread.yield() and try again later. When you release, subtract 1 from the int and continue.
A naive implementation, but it works fine.
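For illustration, a literal sketch of that idea (class and field names are mine). Note that, as the next answer explains, the check and the increment are separate steps, so this is not actually safe:
// Naive "semaphore" exactly as described above: test an int, bump it, otherwise yield.
// WARNING: the check in the loop and the increment below it are not one atomic step,
// so two threads can both get past the check at the same time.
public class NaiveSemaphore {
    private volatile int inUse = 0;   // 0 means the resource is free

    public void acquire() {
        while (inUse != 0) {
            Thread.yield();           // try again later
        }
        inUse++;                      // NOT atomic with the check above
    }

    public void release() {
        inUse--;
    }
}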

I hope that this is homework, because I cannot see any good reason you might want to do this in production code. Wikipedia has a list of algorithms for implementing semaphores in software.

Doing as proposed in the accepted answer will lead to a lot of concurrency issues, because you can't ensure mutual exclusion with it. As an example, two threads asking to increment an integer could both read the boolean (that is proposed as the lock) at the same time, both conclude that it is free, and both set it to its opposite value. Both threads then go on changing things, and when they are done they both write a value to the (not actually) mutually exclusive variable, and the whole purpose of the semaphore is lost. The wait() method is for waiting until something happens, and that's exactly what you want to do.
If you absolutely don't want to use wait(), then implement some kind of double-checking sleep technique: the thread first checks the lock variable, changes it, and sets a flag in an array, in a slot reserved just for that thread so that the write always succeeds. The thread then sleeps for a small interval and checks the whole array for other flags to see whether someone else was at it at the same time. If not, it can continue; otherwise it cannot continue and has to sleep for a random amount of time before trying again (the randomness makes it likely that one of them succeeds on the next attempt). If they collide again, they sleep for an even longer random time. This technique is also used in networks where semaphores cannot be used.
(Of course a semaphore is exactly what you want, but since it uses wait I kind of assumed you wanted something that doesn't use wait at all...)
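For completeness, a much simpler route that also avoids wait(), notify() and synchronized (not the flag-array scheme above, but the usual lock-free approach) is a compare-and-set loop on an AtomicInteger. A minimal sketch, assuming busy-waiting is acceptable:
import java.util.concurrent.atomic.AtomicInteger;

public class SpinSemaphore {
    private final AtomicInteger permits;

    public SpinSemaphore(int permits) {
        this.permits = new AtomicInteger(permits);
    }

    public void acquire() {
        for (;;) {
            int available = permits.get();
            if (available > 0 && permits.compareAndSet(available, available - 1)) {
                return;                 // got a permit
            }
            Thread.yield();             // spin politely and retry
        }
    }

    public void release() {
        permits.incrementAndGet();      // hand the permit back
    }
}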

Related

Wait for method to Finish, and weird interaction with System.println

I am trying to write a genetic program to play through a game, but I am running into a bit of a snag. When I call this code:
public double playMap(GameBoard gb, Player p) {
    gb.playerController = p;
    Game g = new Game(gb);
    int initHP = 0;
    for (Unit u : gb.enemy.units) {
        initHP += u.maxHP;
    }
    g.playGame(false);
    int finalHP = 0;
    for (Unit u : gb.enemy.units) {
        finalHP += u.currHP;
    }
    System.out.println(" " + initHP);
    System.out.println(" " + finalHP);
    System.out.println(" " + (finalHP - initHP));
    if (initHP == finalHP) {
        return -10;
    }
    return initHP - finalHP;
}
the g.playGame() line does not have time to finish, and I am getting incorrect results from the function. I can wait it out until the game is over with a
while (!g.isDone) {
    System.out.println(g.isDone);
}
but not with the same while loop without a print statement. I know there has to be a more elegant solution, but I can't seem to implement the methods I have seen. Also, if anyone knows why I need the print statement in the while loop to get it to wait, that would be great too.
Thanks in advance.
ADDED playGame:
public void playGame(boolean visual) {
    Global.visual = visual;
    if (Global.visual) {
        JFrame application = new JFrame();
        application.setBackground(Color.DARK_GRAY);
        application.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        application.add(this);
        application.setSize(500, 400); // window is 500 pixels wide, 400 high
        application.setVisible(true);
    }
    PlayerInput pi = new PlayerInput();
    this.addKeyListener(pi);
    final Timer timer = new Timer(10/60, null);
    ActionListener listener = new ActionListener() {
        @Override
        public void actionPerformed(ActionEvent e) {
            pi.addPressed();
            if (update(pi)) {
                // application.setVisible(false);
                // application.dispose();
                System.out.println(gb.toString());
                isDone = true;
                timer.stop();
            }
            pi.reset();
        }
    };
    timer.addActionListener(listener);
    timer.start();
    while (!isDone) {
        System.out.println(isDone);
    }
}
First of all, this is a really bad way of doing this. This approach is called "busy waiting" and it is very inefficient.
The problem is most likely that reads and writes to g.isDone are not properly synchronized. As a consequence, there are no guarantees that the "waiting" thread will ever see the update to g.isDone that sets it to true.
There are various ways to ensure that the update is seen. The simplest one is to declare isDone as volatile. Another one is to do the reads and writes within a primitive lock.
The reason that the println() call "fixes" things is that println is doing some synchronization behind the scenes, and this is leading to serendipitous cache flushing (or something) that makes your update visible. (In other words: you got lucky, but exactly how you got lucky is hard to tie down.)
A better solution is to use another mechanism for coordinating the two threads.
You could use Thread.join() so that one thread waits for the other one to terminate (completely!).
You could use a Latch or Semaphore or similar to implement the waiting.
You could use an Executor that delivers a Future and then call Future.get() to wait for that to deliver its result.
You could even use Object.wait and Object.notify ... though that is low-level and easy to get wrong.
Without seeing the full context, it is hard to judge which approach would be most appropriate. But they would all be better than busy-waiting.
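For illustration, a hedged sketch of the latch approach (the class and method names here are mine, not from the question's code):
import java.util.concurrent.CountDownLatch;

class GameRunner {
    private final CountDownLatch gameOver = new CountDownLatch(1);

    void runInBackground() {
        new Thread(() -> {
            // ... play the game here ...
            gameOver.countDown();       // signal that the game has finished
        }).start();
    }

    void awaitCompletion() throws InterruptedException {
        gameOver.await();               // blocks without busy-waiting until countDown() is called
    }
}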
Another answer says this:
If you remove the System.out.println() call from your loop, I believe that the compiler simply doesn't include the loop in the Java bytecode, believing it to be superfluous.
As I explained above, the real problem is inadequate synchronization. To be technical, there needs to be a happens-before relationship between the write of isDone in one thread and the read of isDone in the other one. Various things will give that ... but without that, the compiler is entitled to assume that:
the writing thread does not need to flush the write to memory
the reading thread does not need to check that the memory has changed.
For example, without the happens-before, the compiler would be permitted to optimize
while (!g.isDone) {
    // do nothing
}
to
if (!g.isDone) {
    // do nothing
}
We don't know if this actually happens, or whether the actual cause of "non-visibility" of the update to isDone is something else. (Indeed, it could be JVM version / platform specific. To be sure, you would need to get the JIT compiler to dump the native code for the methods, and analyze the code very carefully.)
Apparently you are running your game in a separate thread. Assuming that thread is called foo, calling foo.join() will block the calling thread until foo finishes executing. You can simply replace your entire loop with foo.join().
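A minimal, self-contained sketch of that approach (the empty Runnable body stands in for the question's g.playGame(false) call):
public class JoinExample {
    public static void main(String[] args) throws InterruptedException {
        Thread gameThread = new Thread(() -> {
            // ... g.playGame(false) would run here ...
        });
        gameThread.start();
        gameThread.join();   // blocks until the game thread has terminated completely
        // safe to read finalHP (or any other result of the game) from here on
    }
}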
If you remove the System.out.println() call from your loop, I believe that the compiler simply doesn't include the loop in the Java bytecode, believing it to be superfluous.

One Synchronized block compared to multiple AtomicInteger increments

I do understand that it is better to use AtomicInteger instead of a synchronized block to increment a shared int value. However, would that still hold in the case of multiple int values?
Which one of the below methods would be better and why? Is there a better way to do it to improve performance?
1) Using synchronized block:
int i, j, k, l;

public synchronized void incrementValues() {
    i++; j++; k++; l++;
}
2) Using AtomicInteger:
AtomicInteger i, j, k, l;
// Initialize i, j, k, l

public void incrementValues() {
    i.incrementAndGet();
    j.incrementAndGet();
    k.incrementAndGet();
    l.incrementAndGet();
}
Or would it be faster if I use ReentrantLock?
3) Using ReentrantLock:
ReentrantLock lock = new ReentrantLock();
int i, j, k, l;

public void incrementValues() {
    lock.lock();
    try {
        i++; j++; k++; l++;
    } finally {
        lock.unlock();
    }
}
Here are my questions:
Is 3 the fastest of them all?
What about 2? For a single integer, 2 is faster than 1. Will 2 become slower than 1 as the number of integers increases?
Edit 1
Modified the question based on Matthias' answer.
i, j, k, l are independent of each other. Individual increments should be atomic, not the whole block. It is OK if thread 2 modifies i before thread 1 modifies k.
Edit 2
Additional Info based on comments so far
I am not looking for an exact answer, as I understand that it depends on how the functions are used, the amount of contention, etc., and that measuring each use case is the best way to determine the exact answer. However, I would like to see people share their knowledge/articles etc. that would throw light on the parameters/optimizations affecting the situation. Thanks for the article @Marco13, it was informative.
First of all, #2 is not thread safe. incrementAndGet() is atomic; however, calling four incrementAndGet operations in a row is not (e.g. after the second incrementAndGet, another thread could get into the same method and start doing the same, as in the example below).
T1: i.incrementAndGet();
T1: j.incrementAndGet();
T1: k.incrementAndGet();
T2: i.incrementAndGet();
T2: j.incrementAndGet();
T1: l.incrementAndGet();
T2: k.incrementAndGet();
T2: l.incrementAndGet();
Then, if the choice is between #1 and #3: if you're not into high-speed stock trading, it won't matter for you. There might be really small differences (in the case of just integers, probably nanoseconds), but it won't really matter. However, I would always go for #1, as it's much simpler and also much safer to use (e.g. imagine you had forgotten to put the unlock() in the finally block; then you could get into big trouble).
Regarding your edits:
For number 1: sometimes it can be important to atomically modify several values at once. Consider that the data is not only incremented but also read at the same time. You would assume that at any point in time all variables carry the same value. However, as the update operation is not atomic, when you read the data it could be that i=j=k=5 and l=4, because the thread that did the increment has not yet reached the last operation.
Whether this is a problem depends very much on your problem. If you don't need such a guarantee, don't care.
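As a hedged sketch of that point, option 1 can be extended with a reader that holds the same monitor, so a reader never sees a half-updated set of values (my illustration, not code from the question):
public class Counters {
    private int i, j, k, l;

    public synchronized void incrementValues() {
        i++; j++; k++; l++;
    }

    public synchronized int[] snapshot() {
        return new int[] { i, j, k, l };   // all four values from the same point in time
    }
}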
For number 2:
Optimisation is hard and concurrency is even harder. I can only recommend NOT thinking about such micro-optimisations. In the best case these optimisations save nanoseconds but make the code very complex. In the worst case there's a false assumption or logical error in the optimisation and you end up with concurrency problems. Most likely, however, your optimisation will simply perform worse.
Also consider that the code you write will probably need to be maintained by someone else at a later point in time. Where you saved milliseconds of execution time, they will waste hours of their life trying to understand what you wanted to do and why you did it that way, while attempting to fix that nasty multi-threading bug.
So for the sake of ease: synchronized is the best thing to use.
The KISS principle REALLY holds true for concurrency.

how to implement a semaphore

I am going through this link, where the implementation of a counting semaphore is given as:
public class CountingSemaphore {
    private int signals = 0;

    public synchronized void take() {
        this.signals++;
        this.notify();
    }

    public synchronized void release() throws InterruptedException {
        while (this.signals == 0) wait();
        this.signals--;
    }
}
I am not able to get that. In the take() method, notify() is called, which will let other threads enter the section. Shouldn't there be a wait() inside the take() method? Please help me understand.
Thanks
Jayendra
The first comment on that article points out:
Don't you have "take" and "release" reversed?
and the author concedes
You are right, that should probably have been reversed.
So, yes, it seems the article got things mixed up a bit.
It probably works if you just switch these two methods.
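For reference, here is the article's example with the two method bodies swapped back into their conventional roles (a hedged correction, not the article's own code):
public class CountingSemaphore {
    private int signals = 0;

    // release a permit and wake up a waiting thread
    public synchronized void release() {
        this.signals++;
        this.notify();
    }

    // block until a permit is available, then consume it
    public synchronized void take() throws InterruptedException {
        while (this.signals == 0) wait();
        this.signals--;
    }
}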
However, in real life, the JDK has semaphores now in the concurrency utils package, and you should use those.
As for learning how things work, looking at the JDK source as a first step is probably a bit challenging (but very good reading after you have reached an initial understanding). Best to find a better article.
Use java.util.concurrent.Semaphore.
A counting semaphore. Conceptually, a semaphore maintains a set of permits. Each acquire() blocks if necessary until a permit is available, and then takes it. Each release() adds a permit, potentially releasing a blocking acquirer. However, no actual permit objects are used; the Semaphore just keeps a count of the number available and acts accordingly.
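A minimal usage sketch (the permit count of 2 is arbitrary):
import java.util.concurrent.Semaphore;

public class SemaphoreDemo {
    public static void main(String[] args) throws InterruptedException {
        Semaphore permits = new Semaphore(2);   // at most two threads in the section at once
        permits.acquire();                      // blocks until a permit is available
        try {
            // ... critical section ...
        } finally {
            permits.release();                  // always hand the permit back
        }
    }
}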
Grep the code.
The method names are switched by a translation error; see the original author's comment. The code makes no sense in this form, since it will produce a deadlock: release() only decrements the counter once it is non-zero, so a thread that calls it before anyone has called take() will wait forever.
If you swap the calls, i.e. lock by calling 'release', the semaphore will work, but not count.

Writing a thread safe modular counter in Java

Full disclaimer: this is not really homework, but I tagged it as such because it is mostly a self-learning exercise rather than actually "for work".
Let's say I want to write a simple thread safe modular counter in Java. That is, if the modulo M is 3, then the counter should cycle through 0, 1, 2, 0, 1, 2, … ad infinitum.
Here's one attempt:
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicModularCounter {
    private final AtomicInteger tick = new AtomicInteger();
    private final int M;

    public AtomicModularCounter(int M) {
        this.M = M;
    }

    public int next() {
        return modulo(tick.getAndIncrement(), M);
    }

    private final static int modulo(int v, int M) {
        return ((v % M) + M) % M;
    }
}
My analysis (which may be faulty) of this code is that since it uses AtomicInteger, it's quite thread safe even without any explicit synchronized method/block.
Unfortunately the "algorithm" itself doesn't quite "work", because when tick wraps around Integer.MAX_VALUE, next() may return the wrong value depending on the modulo M. That is:
System.out.println(Integer.MAX_VALUE + 1 == Integer.MIN_VALUE); // true
System.out.println(modulo(Integer.MAX_VALUE, 3)); // 1
System.out.println(modulo(Integer.MIN_VALUE, 3)); // 1
That is, two calls to next() will return 1, 1 when the modulo is 3 and tick wraps around.
There may also be an issue with next() getting out-of-order values, e.g.:
Thread1 calls next()
Thread2 calls next()
Thread2 completes tick.getAndIncrement(), returns x
Thread1 completes tick.getAndIncrement(), returns y = x+1 (mod M)
Here, barring the aforementioned wrapping problem, x and y are indeed the two correct values to return for these two next() calls, but depending on how the counter behavior is specified, it can be argued that they're out of order. That is, we now have (Thread1, y) and (Thread2, x), but maybe it should really be specified that (Thread1, x) and (Thread2, y) is the "proper" behavior.
So by some definition of the words, AtomicModularCounter is thread-safe, but not actually atomic.
So the questions are:
Is my analysis correct? If not, then please point out any errors.
Is my last statement above using the correct terminology? If not, what is the correct statement?
If the problems mentioned above are real, then how would you fix it?
Can you fix it without using synchronized, by harnessing the atomicity of AtomicInteger?
How would you write it such that tick itself is range-controlled by the modulo and never even gets a chance to wrap over Integer.MAX_VALUE?
We can assume M is at least an order of magnitude smaller than Integer.MAX_VALUE if necessary.
Appendix
Here's a List analogy of the out-of-order "problem".
Thread1 calls add(first)
Thread2 calls add(second)
Now, if we have the list updated successfully with two elements added, but second comes before first, which is at the end, is that "thread safe"?
If that is "thread safe", then what is it not? That is, if we specify that in the above scenario, first should always come before second, what is that concurrency property called? (I called it "atomicity" but I'm not sure if this is the correct terminology).
For what it's worth, what is the Collections.synchronizedList behavior with regards to this out-of-order aspect?
As far as I can see, you just need a variation of the getAndIncrement() method:
public final int getAndIncrement(int modulo) {
    for (;;) {
        int current = atomicInteger.get();
        int next = (current + 1) % modulo;
        if (atomicInteger.compareAndSet(current, next))
            return current;
    }
}
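Dropped into the question's AtomicModularCounter, that CAS loop might look like this (a sketch, keeping the question's field names; tick never leaves [0, M), so it can never overflow):
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicModularCounter {
    private final AtomicInteger tick = new AtomicInteger();
    private final int M;

    public AtomicModularCounter(int M) {
        this.M = M;
    }

    public int next() {
        for (;;) {
            int current = tick.get();
            int next = (current + 1) % M;              // stays in [0, M)
            if (tick.compareAndSet(current, next)) {
                return current;
            }
        }
    }
}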
I would say that aside from the wrapping, it's fine. When two method calls are effectively simultaneous, you can't guarantee which will happen first.
The code is still atomic, because whichever actually happens first, they can't interfere with each other at all.
Basically if you have code which tries to rely on the order of simultaneous calls, you already have a race condition. Even if in the calling code one thread gets to the start of the next() call before the other, you can imagine it coming to the end of its time-slice before it gets into the next() call - allowing the second thread to get in there.
If the next() call had any other side effect - e.g. it printed out "Starting with thread (thread id)" and then returned the next value, then it wouldn't be atomic; you'd have an observable difference in behaviour. As it is, I think you're fine.
One thing to think about regarding wrapping: you can make the counter last an awful lot longer before wrapping if you use an AtomicLong :)
EDIT: I've just thought of a neat way of avoiding the wrapping problem in all realistic scenarios:
Define some large number M * 100000 (or whatever). This should be chosen to be large enough to not be hit too often (as it will reduce performance) but small enough that you can expect the "fixing" loop below to be effective before too many threads have added to the tick to cause it to wrap.
When you fetch the value with getAndIncrement(), check whether it's greater than this number. If it is, go into a "reduction loop" which would look something like this:
long tmp;
while ((tmp = tick.get()) > SAFETY_VALUE)
{
    long newValue = tmp - SAFETY_VALUE;
    tick.compareAndSet(tmp, newValue);
}
Basically this says, "We need to get the value back into a safe range, by decrementing some multiple of the modulus" (so that it doesn't change the value mod M). It does this in a tight loop, basically working out what the new value should be, but only making a change if nothing else has changed the value in between.
It could cause a problem in pathological conditions where you had an infinite number of threads trying to increment the value, but I think it would realistically be okay.
Concerning the atomicity problem: I don't believe that it's possible for the Counter itself to provide behaviour to guarantee the semantics you're implying.
I think we have a thread doing some work
A - get some stuff (for example receive a message)
B - prepare to call Counter
C - Enter Counter <=== counter code is now in control
D - Increment
E - return from Counter <==== just about to leave counter's control
F - application continues
The mediation you're looking for concerns the "payload" identity ordering established at A.
For example, two threads each read a message: one reads X, one reads Y. You want to ensure that X gets the first counter increment and Y gets the second, even though the two threads are running simultaneously and may be scheduled arbitrarily across one or more CPUs.
Hence any ordering must be imposed across all the steps A-F, and enforced by some concurrency control outside of the Counter. For example:
pre-A - Get a lock on Counter (or other lock)
A - get some stuff (for example receive a message)
B - prepare to call Counter
C - Enter Counter <=== counter code is now in control
D - Increment
E - return from Counter <==== just about to leave counter's control
F - application continues
post- F - release lock
Now we have a guarantee at the expense of some parallelism; the threads are waiting for each other. When strict ordering is a requirement this does tend to limit concurrency; it's a common problem in messaging systems.
Concerning the List question: thread-safety should be seen in terms of interface guarantees. There is an absolute minimum requirement: the List must be resilient in the face of simultaneous access from several threads. For example, we could imagine an unsafe list that could deadlock or be left mis-linked so that any iteration would loop for ever. The next requirement is that we should specify behaviour when two threads access it at the same time. There are lots of cases; here are a few:
a). Two threads attempt to add
b). One thread adds item with key "X", another attempts to delete the item with key "X"
c). One thread is iterating while a second thread is adding
Providing that the implementation has clearly defined behaviour in each case, it's thread-safe. The interesting question is which behaviours are convenient.
We can simply synchronise on the list, and hence easily give well-understood behaviour for a and b. However, that comes at a cost in terms of parallelism, and I'm arguing that there is little value in doing so, as you still need to synchronise at some higher level to get useful semantics. So I would have an interface spec saying "adds happen in any order".
As for iteration - that's a hard problem, have a look at what the Java collections promise: not a lot!
This article, which discusses Java collections, may be interesting.
Atomic (as I understand) refers to the fact that an intermediate state is not observable from outside. atomicInteger.incrementAndGet() is atomic, while return this.intField++; is not, in the sense that in the former, you can not observe a state in which the integer has been incremented, but has not yet being returned.
As for thread-safety, the authors of Java Concurrency in Practice provide one definition in their book:
A class is thread-safe if it behaves correctly when accessed from multiple threads, regardless of the scheduling or interleaving of the execution of those threads by the runtime environment, and with no additional synchronization or other coordination on the part of the calling code.
(My personal opinion follows)
Now, if we have the list updated successfully with two elements added, but second comes before first, which is at the end, is that "thread safe"?
If thread1 entered the entry set of the mutex object (in the case of Collections.synchronizedList() the list itself) before thread2, it is guaranteed that first is positioned ahead of second in the list after the update. This is because the synchronized keyword uses a fair lock: whoever sits at the head of the queue gets to go first. Fair locks can be quite expensive, and you can also have unfair locks in Java (through the java.util.concurrent utilities). If you do that, then there is no such guarantee.
However, the Java platform is not a real-time computing platform, so you can't predict how long a piece of code will take to run. This means that if you want first ahead of second, you need to ensure it explicitly in Java; it is impossible to ensure this by "controlling the timing" of the calls.
Now, what is thread safe or unsafe here? I think this simply depends on what needs to be done. If you just need to avoid the list being corrupted, and it doesn't matter for the correctness of the application whether first or second ends up first in the list, then avoiding the corruption is enough to establish thread-safety. If it does matter, it is not enough.
So, I think thread-safety can not be defined in the absence of the particular functionality we are trying to achieve.
The famous String.hashCode() doesn't use any particular "synchronization mechanism" provided in Java, but it is still thread safe because one can safely use it in one's own app without worrying about synchronization etc.
Famous String.hashCode() trick:
int hash = 0;

int hashCode() {
    int hash = this.hash;
    if (hash == 0) {
        hash = this.hash = calcHash();
    }
    return hash;
}

Can a thread call wait() on two locks at once in Java (6)

I've just been messing around with threads in Java to get my head around them (it seems like the best way to do so) and now understand what's going on with synchronized, wait() and notify().
I'm curious about whether there's a way to wait() on two resources at once. I think the following won't quite do what I'm thinking of (edit: note that the usual while loops have been left out of this example to focus just on freeing up two resources):
synchronized (token1) {
    synchronized (token2) {
        token1.wait();
        token2.wait(); // won't run until token1 is returned
        System.out.println("I got both tokens back");
    }
}
In this (very contrived) case token2 will be held until token1 is returned, then token1 will be held until token2 is returned. The goal is to release both token1 and token2, then resume when both are available (note that moving the token1.wait() outside the inner synchronized loop is not what I'm getting at).
A loop checking whether both are available might be more appropriate to achieve this behaviour (would this be getting near the idea of double-check locking?), but would use up extra resources - I'm not after a definitive solution since this is just to satisfy my curiosity.
Edit Let's just for the sake of argument say that the two tokens here represent two distinct resources that the thread must use at the same time, and that some other threads will need both at the same time.
No, not with a standard Java lock. Although I guess you could construct such a lock.
wait() should be called within a while loop (wait may wake up spuriously, and in most situations you would want the loop anyway), so some kind of flag would make more sense.
The example doesn't include the condition upon which the waiting is performed. Typically, a thread waits only while, and until, the condition it needs has been met. Generally, I would think you could accomplish what you are trying to do by having a single lock / wait and abstracting the 'token1 and token2' into the conditional logic.
For example
synchronized (object) {
    while (!token1ConditionMet || !token2ConditionMet) {   // wait until BOTH conditions are met
        wait();
    }
}
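A fuller sketch of that single-monitor idea, with two boolean flags guarded by one shared lock object (all names here are illustrative):
public class TwoResourceGate {
    private final Object lock = new Object();
    private boolean token1Available;
    private boolean token2Available;

    public void awaitBoth() throws InterruptedException {
        synchronized (lock) {
            while (!token1Available || !token2Available) {
                lock.wait();                  // releases 'lock' while waiting
            }
            // both resources are available here; take them while still holding 'lock'
            token1Available = false;
            token2Available = false;
        }
    }

    public void returnToken1() {
        synchronized (lock) {
            token1Available = true;
            lock.notifyAll();                 // wake waiters so they re-check the condition
        }
    }

    public void returnToken2() {
        synchronized (lock) {
            token2Available = true;
            lock.notifyAll();
        }
    }
}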
