The JVM performs a neat trick called lock elision to avoid the cost of locking on objects that are only visible to one thread.
There's a good description of the trick here:
http://www.ibm.com/developerworks/java/library/j-jtp10185/
Does the .Net CLR do something similar? If not then why not?
It's neat, but is it useful? I have a hard time coming up with an example where the compiler can prove that a lock is thread-local. Almost all classes don't use locking by default, and when you choose one that locks, in most cases it will be referenced from some kind of static variable, foiling the compiler optimization anyway.
Another thing is that the java vm uses escape analysis in its proof. And AFAIK .net hasn't implemented escape analysis. Other uses of escape analysis such as replacing heap allocations with stack allocations sound much more useful and should be implemented first.
IMO it's currently not worth the coding effort. There are many areas in the .net VM which are not optimized very well and have much bigger impact.
SSE vector instructions and delegate inlining are two examples from which my code would profit much more than from this optimization.
EDIT: As chibacity points out below, this is talking about making locks really cheap rather than completely eliminating them. I don't believe the JIT has the concept of "thread-local objects" although I could be mistaken... and even if it doesn't now, it might in the future of course.
EDIT: Okay, the below explanation is over-simplified, but has at least some basis in reality :) See Joe Duffy's blog post for some rather more detailed information.
I can't remember where I read this - probably "CLR via C#" or "Concurrent Programming on Windows" - but I believe that the CLR allocates sync blocks to objects lazily, only when required. When an object whose monitor has never been contested is locked, the object header is atomically updated with a compare-exchange operation to say "I'm locked". If a different thread then tries to acquire the lock, the CLR will be able to determine that it's already locked, and basically upgrade that lock to a "full" one, allocating it a sync block.
When an object has a "full" lock, locking operations are more expensive than locking and unlocking an otherwise-uncontested object.
If I'm right about this (and it's a pretty hazy memory) it should be feasible to lock and unlock a monitor on different threads cheaply, so long as the locks never overlap (i.e. there's no contention).
I'll see if I can dig up some evidence for this...
In answer to your question: no, the CLR/JIT does not perform the "lock elision" optimization, i.e. it does not remove locks from code whose objects are visible to only a single thread. This can easily be confirmed with simple single-threaded benchmarks on code where, in Java, you would expect lock elision to apply.
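The kind of probe I mean looks like the following (a Java version, since that's where elision is expected to kick in). This is a rough sketch with made-up names, not a proper benchmark; for real measurements use a harness like JMH, since naive timing loops are easily distorted by warm-up and dead-code elimination:

```java
// Compares a loop that synchronizes on a purely local, non-escaping object
// against the same loop without the lock. If the runtime elides the lock,
// the two timings should converge after JIT warm-up.
public class LockElisionProbe {
    static long lockedSum(int n) {
        long sum = 0;
        Object local = new Object();      // never escapes this method
        for (int i = 0; i < n; i++) {
            synchronized (local) {        // candidate for elision
                sum += i;
            }
        }
        return sum;
    }

    static long plainSum(int n) {
        long sum = 0;
        for (int i = 0; i < n; i++) {
            sum += i;
        }
        return sum;
    }

    public static void main(String[] args) {
        int n = 10_000_000;
        long t0 = System.nanoTime();
        long a = lockedSum(n);
        long t1 = System.nanoTime();
        long b = plainSum(n);
        long t2 = System.nanoTime();
        System.out.printf("locked: %d ms, plain: %d ms (sums equal: %b)%n",
                (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000, a == b);
    }
}
```

The same shape ported to C# with `lock (local) { ... }` shows a persistent gap, which is the evidence that the CLR does not elide the lock.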
There are likely a number of reasons why it does not do this, but chief among them is the fact that in the .NET Framework this would be an uncommonly applicable optimization, and so not worth the effort of implementing.
Also, in .NET uncontended locks are extremely fast, because they are non-blocking and execute in user space (JVMs appear to have similar optimizations for uncontended locks, e.g. IBM's). To quote the threading chapter of C# 3.0 in a Nutshell:
"Locking is fast: you can expect to acquire and release a lock in less
than 100 nanoseconds on a 3 GHz computer if the lock is uncontended."
A couple of example scenarios where lock elision could be applied, and why it's not worth it:
Using locks within a method in your own code that acts purely on locals
There is not really a good reason to use locking in this scenario in the first place, so unlike optimizations such as hoisting loop invariants or method inlining, this is a pretty uncommon case and the result of unnecessary use of locks. The runtime shouldn't be concerned with optimizing away uncommon, pathologically bad usage.
Using someone else's type that is declared as a local which uses locks internally
Although this sounds more useful, the .NET Framework's general design philosophy is to leave the responsibility for locking to clients, so it's rare that types have any internal lock usage. Indeed, the .NET Framework is pathologically unsynchronized when it comes to instance methods on types that are not specifically designed and advertised to be concurrent. On the other hand, Java has common types that do include synchronization, e.g. StringBuffer and Vector. As the .NET BCL is largely unsynchronized, lock elision is likely to have little effect.
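To make that contrast concrete, here is a small Java sketch (illustrative only). StringBuffer synchronizes internally on every call, so when the instance is a local that never escapes, this is exactly the case where the JVM's escape-analysis-based elision pays off; StringBuilder is its unsynchronized twin, and the nearest analogy to how .NET types are usually designed:

```java
// Two functionally identical builders: one locks on every append, one never does.
public class InternalLocking {
    static String withLocks(int n) {
        StringBuffer sb = new StringBuffer();   // every append() is synchronized
        for (int i = 0; i < n; i++) sb.append('x');
        return sb.toString();
    }

    static String withoutLocks(int n) {
        StringBuilder sb = new StringBuilder(); // no synchronization at all
        for (int i = 0; i < n; i++) sb.append('x');
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(withLocks(5).equals(withoutLocks(5)));
    }
}
```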
Summary
I think overall, there are fewer cases in .Net where lock elision would kick in, because there are simply not as many places where there would be thread-local locks. Locks are far more likely to be used in places which are visible to multiple threads and therefore should not be elided. Also, uncontended locking is extremely quick.
I had difficulty finding any real-world evidence that lock elision actually provides much of a performance benefit in Java (for example...), and the latest docs for at least the Oracle JVM state that elision is not always applied for thread-local locks, hinting that it is not a guaranteed optimization anyway.
I suspect that lock elision is something that is made possible through the introduction of escape analysis in the JVM, but is not as important for performance as EA's ability to analyze whether reference types can be allocated on the stack.
Related
This is a follow-up question to
How to demonstrate Java instruction reordering problems?
There are many articles and blogs referring to Java and JVM instruction reordering which may lead to counter-intuitive results in user operations.
When I asked for a demonstration of Java instruction reordering causing unexpected results, several comments were made to the effect that a more general area of concern is memory reordering, and that it would be difficult to demonstrate on an x86 CPU.
Is instruction reordering just a part of a bigger issue of memory reordering, compiler optimizations and memory models? Are these issues really unique to the Java compiler and the JVM? Are they specific to certain CPU types?
Memory reordering is possible without compile-time reordering of operations in source vs. asm. The order of memory operations (loads and stores) to coherent shared cache (i.e. memory) done by a CPU running a thread is also separate from the order it executes those instructions in.
Executing a load means accessing cache (or the store buffer), but "executing" a store in a modern CPU is separate from its value actually becoming visible to other cores (commit from the store buffer to L1d cache). Executing a store really just writes the address and data into the store buffer; commit isn't allowed until after the store has retired, and is thus known to be non-speculative, i.e. definitely happening.
Describing memory reordering as "instruction reordering" is misleading. You can get memory reordering even on a CPU that does in-order execution of asm instructions (as long as it has some mechanisms to find memory-level parallelism and let memory operations complete out of order in some ways), even if asm instruction order matches source order. Thus that term wrongly implies that merely having plain load and store instructions in the right order (in asm) would be useful for anything related to memory order; it isn't, at least on non-x86 CPUs. It's also weird because instructions have effects on registers (at least loads, and on some ISAs with post-increment addressing modes, stores can, too).
It's convenient to talk about something like StoreLoad reordering as x = 1 "happening" after a tmp = y load, but the thing to talk about is when the effects happen (for loads) or are visible to other cores (for stores) in relation to other operations by this thread. But when writing Java or C++ source code, it makes little sense to care whether that happened at compile time or run-time, or how that source turned into one or more instructions. Also, Java source doesn't have instructions, it has statements.
Perhaps the term could make sense to describe compile-time reordering between bytecode instructions in a .class file vs. JIT-compiled native machine code, but if so it's a misuse to apply it to memory reordering in general, rather than only to compile/JIT-time reordering (excluding run-time reordering). Highlighting just compile-time reordering is not super helpful, unless you have signal handlers (as in POSIX) or an equivalent that runs asynchronously in the context of an existing thread.
This effect is not unique to Java at all. (Although I hope this weird use of "instruction reordering" terminology is!) It's very much the same as C++ (and I think C# and Rust for example, probably most other languages that want to normally compile efficiently, and require special stuff in the source to specify when you want your memory operations ordered wrt. each other, and promptly visible to other threads). https://preshing.com/20120625/memory-ordering-at-compile-time/
C++ defines even less than Java about access to non-atomic<> variables without synchronization to ensure that there's never a write in parallel with anything else (undefined behaviour; see footnote 1).
Memory reordering is even present in assembly language, where by definition there's no reordering between source and machine code. All SMP CPUs except a few ancient ones like the 80386 also do memory reordering at run time, so a lack of instruction reordering doesn't gain you anything, especially on machines with a "weak" memory model (most modern CPUs other than x86): https://preshing.com/20120930/weak-vs-strong-memory-models/ - x86 is "strongly ordered", but not SC: it's program order plus a store buffer with store forwarding. So if you want to actually demo the breakage from insufficient ordering in Java on x86, it's either going to be compile-time reordering, or a lack of sequential consistency via StoreLoad reordering or store-buffer effects. Other unsafe code, like the accepted answer on your previous question, that might happen to work on x86 will fail on weakly-ordered CPUs like ARM.
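A hand-rolled sketch of such a StoreLoad demo follows (class and method names are mine). Each iteration runs "x = 1; r1 = y" against "y = 1; r2 = x" on two threads; observing r1 == 0 && r2 == 0 is the outcome forbidden by sequential consistency. Be warned that spawning fresh threads per iteration makes the race window tiny, so the count may well be zero on a given run; a real test should use jcstress, which is built for exactly this:

```java
// Counts how often the SC-forbidden outcome (r1 == 0 && r2 == 0) is observed.
// The fields are plain (non-volatile), so either the JIT or the CPU's store
// buffer is permitted to produce the reordering.
public class StoreLoadDemo {
    static int x, y, r1, r2;

    static int countReorderings(int iterations) throws InterruptedException {
        int witnessed = 0;
        for (int i = 0; i < iterations; i++) {
            x = 0; y = 0;                                   // visible via start()
            Thread t1 = new Thread(() -> { x = 1; r1 = y; });
            Thread t2 = new Thread(() -> { y = 1; r2 = x; });
            t1.start(); t2.start();
            t1.join(); t2.join();                           // r1/r2 visible via join()
            if (r1 == 0 && r2 == 0) witnessed++;            // the "impossible" result
        }
        return witnessed;
    }

    public static void main(String[] args) throws InterruptedException {
        int n = 10_000;
        System.out.println("StoreLoad reordering witnessed " +
                countReorderings(n) + " times out of " + n);
    }
}
```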
(Fun fact: modern x86 CPUs aggressively execute loads out of order, but check to make sure they were "allowed" to do that according to x86's strongly-ordered memory model, i.e. that the cache line they loaded from is still readable, otherwise roll back the CPU state to before that: machine_clears.memory_ordering perf event. So they maintain the illusion of obeying the strong x86 memory-ordering rules. Other ISAs have weaker orders and can just aggressively execute loads out of order without later checks.)
Some CPU memory models even allow different threads to disagree about the order of stores done by two other threads. The C++ memory model allows that too, so extra barriers on PowerPC are needed only for sequential consistency (atomic with memory_order_seq_cst, like Java volatile), not for acquire/release or weaker orders.
Related:
How does memory reordering help processors and compilers?
How is load->store reordering possible with in-order commit? - memory reordering on in-order CPUs via other effects, like scoreboarding loads with a cache that can do hit-under-miss, and/or out-of-order commit from the store buffer, on weakly-ordered ISAs that allow this. (Also LoadStore reordering on OoO exec CPUs that still retire instructions in order, which is actually more surprising than on in-order CPUs which have special mechanisms to allow memory-level parallelism for loads, that OoO exec could replace.)
Are memory barriers needed because of cpu out of order execution or because of cache consistency problem? (basically a duplicate of this; I didn't say much there that's not here)
Are loads and stores the only instructions that gets reordered? (at runtime)
Does an x86 CPU reorder instructions? (yes)
Can a speculatively executed CPU branch contain opcodes that access RAM? - store execution order isn't even relevant for memory ordering between threads, only commit order from the store buffer to L1d cache. A store buffer is essential to decouple speculative exec (including of store instructions) from anything that's visible to other cores. (And from cache misses on those stores.)
Why is integer assignment on a naturally aligned variable atomic on x86? - true in asm, but not safe in C/C++; you need std::atomic<int> with memory_order_relaxed to get the same asm but in portably-safe way.
Globally Invisible load instructions - where does load data come from: store forwarding is possible, so it's more accurate to say x86's memory model is "program order + a store buffer with store forwarding" than to say "only StoreLoad reordering", if you ever care about this core reloading its own recent stores.
Why memory reordering is not a problem on single core/processor machines? - just like the as-if rule for compilers, out-of-order exec (and other effects) have to preserve the illusion (within one core and thus thread) of instructions fully executing one at a time, in program order, with no overlap of their effects. This is basically the cardinal rule of CPU architecture.
LWN: Who's afraid of a big bad optimizing compiler? - surprising things compilers can do to C code that uses plain (non-volatile / non-_Atomic) accesses. This is mostly relevant for the Linux kernel, which rolls its own atomics with inline asm for some things like barriers, but also uses just C volatile for pure loads / pure stores (which is very different from Java volatile; see footnote 2).
Footnote 1: C++ UB means not just an unpredictable value loaded, but that the ISO C++ standard has nothing to say about what can/can't happen in the whole program at any time before or after UB is encountered. In practice for memory ordering, the consequences are often predictable (for experts who are used to looking at compiler-generated asm) depending on the target machine and optimization level, e.g. hoisting loads out of loops breaking spin-wait loops that fail to use atomic. But of course you're totally at the mercy of whatever the compiler happens to do when your program contains UB, not at all something you can rely on.
Caches are coherent, despite common misconceptions
However, all real-world systems that Java or C++ run multiple threads across do have coherent caches; seeing stale data indefinitely in a loop is a result of compilers keeping values in registers (which are thread-private), not of CPU caches not being visible to each other. This is what makes C++ volatile work in practice for multithreading (but don't actually do that because C++11 std::atomic made it obsolete).
Effects like never seeing a flag variable change are due to compilers optimizing global variables into registers, not instruction reordering or cpu caching. You could say the compiler is "caching" a value in a register, but you can choose other wording that's less likely to confuse people that don't already understand thread-private registers vs. coherent caches.
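The register-caching effect is easy to sketch in Java (names are mine). With volatile, the write below is guaranteed to become visible and the worker's loop terminates; delete the volatile keyword and the JIT is allowed to hoist the read of the flag out of the loop, so the worker can spin forever:

```java
// Demonstrates flag visibility: volatile forbids "caching" the flag in a register.
public class FlagVisibility {
    static volatile boolean stop = false;   // drop `volatile` and this may hang

    static boolean spinUntilStopped() throws InterruptedException {
        stop = false;
        Thread worker = new Thread(() -> {
            while (!stop) { /* spin until the flag is seen */ }
        });
        worker.start();
        Thread.sleep(50);                   // let the worker enter its loop
        stop = true;                        // volatile write: guaranteed visible
        worker.join(5_000);                 // bounded wait, just in case
        return !worker.isAlive();           // true => the worker saw the flag
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("worker observed the flag: " + spinUntilStopped());
    }
}
```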
Footnote 2: When comparing Java and C++, also note that C++ volatile doesn't guarantee anything about memory ordering, and in fact in ISO C++ it's undefined behaviour for multiple threads to be writing the same object at the same time even with volatile. Use std::memory_order_relaxed if you want inter-thread visibility without ordering wrt. surrounding code.
(Java volatile is like C++ std::atomic<T> with the default std::memory_order_seq_cst, and AFAIK Java provides no way to relax that to do more efficient atomic stores, even though most algorithms only need acquire/release semantics for their pure-loads and pure-stores, which x86 can do for free. Draining the store buffer for sequential consistency costs extra. Not much compared to inter-thread latency, but significant for per-thread throughput, and a big deal if the same thread is doing a bunch of stuff to the same data without contention from other threads.)
Whilst trying to understand how SubmissionPublisher (source code in OpenJDK 10, Javadoc), a new class added to the Java SE in version 9, has been implemented, I stumbled across a few API calls to VarHandle I wasn't previously aware of:
fullFence, acquireFence, releaseFence, loadLoadFence and storeStoreFence.
After doing some research, especially regarding the concept of memory barriers/fences (I have heard of them previously, yes; but never used them, thus was quite unfamiliar with their semantics), I think I have a basic understanding of what they are for. Nonetheless, as my questions might arise from a misconception, I want to ensure that I got it right in the first place:
Memory barriers are reordering constraints regarding reading and writing operations.
Memory barriers can be categorized into two main categories: unidirectional and bidirectional memory barriers, depending on whether they set constraints on either reads or writes or both.
C++ supports a variety of memory barriers; however, these do not match up one-to-one with those provided by VarHandle. Some of the memory barriers available on VarHandle do, though, provide ordering effects that are compatible with their corresponding C++ memory barriers.
#fullFence is compatible with atomic_thread_fence(memory_order_seq_cst)
#acquireFence is compatible with atomic_thread_fence(memory_order_acquire)
#releaseFence is compatible with atomic_thread_fence(memory_order_release)
#loadLoadFence and #storeStoreFence have no compatible C++ counterpart
The word compatible seems to be really important here, since the semantics clearly differ in the details. For instance, all C++ barriers are bidirectional, whereas Java's barriers aren't (necessarily).
Most memory barriers also have synchronization effects. These depend in particular on the barrier type used and on previously-executed barrier instructions in other threads. As the full implications of a barrier instruction are hardware-specific, I'll stick with the higher-level (C++) barriers. In C++, for instance, changes made prior to a release barrier instruction are visible to a thread executing an acquire barrier instruction.
Are my assumptions correct? If so, my resulting questions are:
Do the memory barriers available in VarHandle cause any kind of memory synchronization?
Regardless of whether they cause memory synchronization or not, what may reordering constraints be useful for in Java? The Java Memory Model already gives some very strong guarantees regarding ordering when volatile fields, locks or VarHandle operations like #compareAndSet are involved.
In case you're looking for an example: the aforementioned BufferedSubscription, an inner class of SubmissionPublisher (source linked above), establishes a full fence in line 1079, in the function growAndAdd. However, it is unclear to me what it is there for.
This is mainly a non-answer, really (I initially wanted to make it a comment, but as you can see, it's far too long). It's just that I have questioned this myself a lot, done a lot of reading and research, and at this point in time I can safely say: this is complicated. I even wrote multiple tests with jcstress to figure out how they really work (while looking at the generated assembly code), and while some of them somehow made sense, the subject in general is by no means easy.
The very first thing you need to understand:
The Java Language Specification (JLS) does not mention barriers anywhere. For Java, these would be an implementation detail: the JLS really works in terms of happens-before semantics. To be able to specify these properly according to the JMM (Java Memory Model), the JMM would have to change quite a lot.
This is work in progress.
Second, if you really want to scratch the surface here, this is the very first thing to watch. The talk is incredible. My favorite part is when Herb Sutter raises his 5 fingers and says, "This is how many people can really and correctly work with these." That should give you a hint of the complexity involved. Nevertheless, there are some trivial examples that are easy to grasp (like a counter updated by multiple threads that does not care about other memory guarantees, but only cares that it is itself incremented correctly).
Another example is when (in java) you want a volatile flag to control threads to stop/start. You know, the classical:
volatile boolean stop = false; // one thread writes, one thread reads this
If you work with Java, you know that without volatile this code is broken (you can read about why double-checked locking is broken without it, for example). But do you also know that for some people who write high-performance code this is too much? A volatile read/write also guarantees sequential consistency, which carries some strong guarantees; some people want a weaker version of this.
A thread safe flag, but not volatile? Yes, exactly: VarHandle::set/getOpaque.
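A minimal sketch of such an opaque flag follows (class and method names are mine). Opaque access guarantees atomicity and eventual visibility of the write, and that the read is not hoisted out of the loop, but it buys none of the ordering that volatile imposes on surrounding accesses:

```java
import java.lang.invoke.MethodHandles;
import java.lang.invoke.VarHandle;

// A stop flag with opaque (not volatile) semantics via VarHandle.
public class OpaqueFlag {
    static boolean stop;
    static final VarHandle STOP;
    static {
        try {
            STOP = MethodHandles.lookup()
                    .findStaticVarHandle(OpaqueFlag.class, "stop", boolean.class);
        } catch (ReflectiveOperationException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    static boolean runAndStop() throws InterruptedException {
        stop = false;
        Thread worker = new Thread(() -> {
            // getOpaque: the read happens per iteration and eventually sees the write.
            while (!(boolean) STOP.getOpaque()) Thread.onSpinWait();
        });
        worker.start();
        STOP.setOpaque(true);   // cheaper than a volatile write: no ordering paid for
        worker.join(5_000);
        return !worker.isAlive();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("worker stopped: " + runAndStop());
    }
}
```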
You might question why someone would need that. Not everyone is interested in all the guarantees that are piggy-backed onto a volatile access.
Let's see how we would achieve this in Java. First of all, such exotic things already existed in the API: AtomicInteger::lazySet. This is unspecified in the Java Memory Model and has no clear definition; still, people used it (LMAX, afaik, or this for more reading). IMHO, AtomicInteger::lazySet is VarHandle::releaseFence (or VarHandle::storeStoreFence).
Let's try to answer why someone needs these.
The JMM has basically two ways to access a field: plain and volatile (which guarantees sequential consistency). All these methods that you mention exist to bring something in between those two: release/acquire semantics. There are cases, I guess, where people actually need this.
An even further relaxation beyond release/acquire is opaque, which I am still trying to fully understand.
Thus, bottom line (your understanding is fairly correct, btw): if you plan to use these in Java, they have no specification at the moment; do so at your own risk. If you do want to understand them, their equivalent C++ modes are the place to start.
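To make the release/acquire middle ground concrete, here is a sketch (names are mine) of the classic publish pattern built from the fences in the question: releaseFence orders the plain data write before the flag write, and acquireFence orders the flag read before the plain data read, so the reader is guaranteed to observe the published value:

```java
import java.lang.invoke.MethodHandles;
import java.lang.invoke.VarHandle;

// Publish a plain int through an opaque flag, ordered by explicit fences.
public class FencePublish {
    static int data;
    static boolean ready;
    static final VarHandle READY;
    static {
        try {
            READY = MethodHandles.lookup()
                    .findStaticVarHandle(FencePublish.class, "ready", boolean.class);
        } catch (ReflectiveOperationException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    static int publishAndRead() throws InterruptedException {
        data = 0; ready = false;
        int[] seen = new int[1];
        Thread reader = new Thread(() -> {
            while (!(boolean) READY.getOpaque()) Thread.onSpinWait();
            VarHandle.acquireFence();   // flag read ordered before the data read
            seen[0] = data;             // plain read, guaranteed to see 42
        });
        reader.start();
        data = 42;                      // plain write
        VarHandle.releaseFence();       // data write ordered before the flag write
        READY.setOpaque(true);          // eventually visible to the spinning reader
        reader.join();
        return seen[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("reader saw data = " + publishAndRead());
    }
}
```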
Quote from book java concurrency in practice:
The performance cost of synchronization comes from several sources.
The visibility guarantees provided by synchronized and volatile may
entail using special instructions called memory barriers that can
flush or invalidate caches, flush hardware write buffers, and stall
execution pipelines. Memory barriers may also have indirect
performance consequences because they inhibit other compiler
optimizations; most operations cannot be reordered with memory
barriers. When assessing the performance impact of synchronization, it
is important to distinguish between contended and uncontended
synchronization. The synchronized mechanism is optimized for the
uncontended case (volatile is always uncontended), and at this
writing, the performance cost of a "fast-path" uncontended
synchronization ranges from 20 to 250 clock cycles for most systems.
Can you clarify this?
What if I have a huge number of threads which read a volatile variable?
Can you provide a definition of contention?
Is there a tool to measure contention? In what units is it measured?
Can you clarify this?
That is one dense paragraph that touches a lot of topics. Which topic or topics specifically are you asking for clarification? Your question is too broad to answer satisfactorily. Sorry.
Now, if your question is specifically about uncontended synchronization, it means that threads within a JVM do not have to block, get unblocked/notified, and then go back to a blocked state.
Under the hood, the JVM uses hardware-specific memory barriers that ensure
a volatile field's writes become visible to all threads (conceptually, as if it were always read from and written to main memory rather than a core-private cache), and
your thread will not block/unblock to access it.
There is no contention. When you use a synchronized block, OTOH, all your threads are in a blocked state except one: the one accessing whatever data is being protected by the synchronized block.
Let's call that thread, the one accessing the synchronized data, thread A.
Now, here is the kicker: when thread A is done with the data and exits the synchronized block, the JVM wakes up all the other threads that are/were waiting for thread A to exit the synchronized block.
They all wake up (and that is expensive CPU/memory wise). And they all race trying to get a hold of the synchronization block.
Imagine a whole bunch of people trying to exit a crowded room through a tiny door. Yep, like that; that's how threads act when they try to grab a synchronization lock.
But only one gets it and gets in. All the others go back to sleep, kind of, in what is called a blocked state. This is also expensive, resource wise.
So every time one of the threads exits a synchronized block, all the other threads go crazy (best mental image I can think of) trying to get access to it; one gets it, and all the others go back to a blocked state.
That's what makes synchronized blocks expensive. Now, here is the caveat: it used to be very expensive pre-JDK 1.4. That's 17 years ago. Java 1.4 started seeing some improvements (2003, IIRC).
Then Java 1.5 introduced even greater improvements in 2005, 12 years ago, which made synchronized blocks less expensive.
It is important to keep such things in mind. There is a lot of outdated information out there.
What if I have a huge number of threads which read a volatile variable?
It wouldn't matter that much in terms of correctness. A volatile field will always show a consistent value regardless of the number of threads.
Now, if you have a very large number of threads, performance can suffer because of context switches, memory utilization, etc. (and not necessarily or primarily because of accessing a volatile field).
Can you provide a definition of contention?
Please don't take it the wrong way, but if you are asking that question, I'm afraid you are not fully prepared to use a book like the one you are reading.
You will need a more basic introduction to concurrency, and contention specifically.
https://en.wikipedia.org/wiki/Resource_contention
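As for the tooling sub-question: the JDK itself can report per-thread contention through ThreadMXBean. A rough sketch follows (class and method names are mine); getBlockedCount() counts how many times a thread blocked on a monitor, and, once contention monitoring is enabled, getBlockedTime() reports the total time blocked in milliseconds:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

// Forces one contended monitor acquisition and reads the contender's stats.
public class ContentionProbe {
    static final Object LOCK = new Object();

    static long contendOnce() throws InterruptedException {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        if (mx.isThreadContentionMonitoringSupported()) {
            mx.setThreadContentionMonitoringEnabled(true); // enables getBlockedTime()
        }
        Thread contender = new Thread(() -> {
            synchronized (LOCK) { /* acquired only after main releases it */ }
        });
        long blockedCount;
        synchronized (LOCK) {
            contender.start();
            while (contender.getState() != Thread.State.BLOCKED) {
                Thread.onSpinWait();                       // wait until it is contending
            }
            ThreadInfo info = mx.getThreadInfo(contender.getId());
            blockedCount = info.getBlockedCount();         // times blocked on monitors
        }
        contender.join();
        return blockedCount;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("contender blocked-count: " + contendOnce());
    }
}
```

Profilers such as JDK Flight Recorder expose the same monitor-contention events with stack traces, which is usually more practical than polling the MXBean by hand.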
Best regards.
I keep reading about how biased locking, using the flag -XX:+UseBiasedLocking, can improve the performance of un-contended synchronization. I couldn't find a reference to what it does and how it improves the performance.
Can anyone explain to me what exactly it is, or maybe point me to some links/resources that explain it?
Essentially, if your objects are locked only by one thread, the JVM can make an optimization and "bias" that object to that thread in such a way that subsequent atomic operations on the object incur no synchronization cost. I suppose this is typically geared towards overly conservative code that performs locks on objects without ever exposing them to another thread. The actual synchronization overhead will only kick in once another thread tries to obtain a lock on the object.
It is on by default in Java 6.
-XX:+UseBiasedLocking
Enables a technique for improving the performance of uncontended synchronization. An object is "biased" toward the thread which first acquires its monitor via a monitorenter bytecode or synchronized method invocation; subsequent monitor-related operations performed by that thread are relatively much faster on multiprocessor machines. Some applications with significant amounts of uncontended synchronization may attain significant speedups with this flag enabled; some applications with certain patterns of locking may see slowdowns, though attempts have been made to minimize the negative impact.
Does this not answer your questions?
http://www.oracle.com/technetwork/java/tuning-139912.html#section4.2.5
Enables a technique for improving the performance of uncontended
synchronization. An object is "biased" toward the thread which first
acquires its monitor via a monitorenter bytecode or synchronized
method invocation; subsequent monitor-related operations performed by
that thread are relatively much faster on multiprocessor machines.
Some applications with significant amounts of uncontended
synchronization may attain significant speedups with this flag
enabled; some applications with certain patterns of locking may see
slowdowns, though attempts have been made to minimize the negative
impact.
Though I think you'll find it's on by default in 1.6. Use the PrintFlagsFinal diagnostic option to see what the effective flags are. Make sure you specify -server if you're investigating for a server application because the flags can differ:
http://www.jroller.com/ethdsy/entry/print_all_jvm_flags
I've been wondering about biased locks myself.
However, it seems that Java's biased locks are slower on Intel's Nehalem processors than normal locks, and presumably on the two generations of processors since Nehalem. See http://mechanical-sympathy.blogspot.com/2011/11/java-lock-implementations.html
and here http://www.azulsystems.com/blog/cliff/2011-11-16-a-short-conversation-on-biased-locking
Also more information here https://blogs.oracle.com/dave/entry/biased_locking_in_hotspot
I've been hoping that there is some relatively cheap way to revoke a biased lock on intel, but I'm beginning to believe that isn't possible. The articles I've seen on how it's done rely on either:
using the os to stop the thread
sending a signal, ie running code in the other thread
having safe points that are guaranteed to run fairly often in the other thread and waiting for one to be executed (which is what java does).
having similar safe points implemented as a call to a return, where the other thread MODIFIES THE CODE to insert a breakpoint...
Worth mentioning that biased locking is disabled by default from JDK 15 onwards:
JEP 374: Disable and Deprecate Biased Locking
The performance gains seen in the past are far less evident today. Many applications that benefited from biased locking are older, legacy applications that use the early Java collection APIs, which synchronize on every access (e.g., Hashtable and Vector). Newer applications generally use the non-synchronized collections (e.g., HashMap and ArrayList), introduced in Java 1.2 for single-threaded scenarios, or the even more-performant concurrent data structures, introduced in Java 5, for multi-threaded scenarios.
Further
Biased locking introduced a lot of complex code into the synchronization subsystem and is invasive to other HotSpot components as well. This complexity is a barrier to understanding various parts of the code and an impediment to making significant design changes within the synchronization subsystem. To that end we would like to disable, deprecate, and eventually remove support for biased locking.
And ya, no more System.identityHashCode(o) magic ;)
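The collection contrast the JEP describes is easy to see in miniature (a sketch; both runs are single-threaded). Every Vector.add() acquires the Vector's monitor, which is exactly the repeated, uncontended, same-thread acquisition pattern biased locking was designed to cheapen; ArrayList does no locking at all, which is why newer code rarely benefits from the bias:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Vector;

// Fills a legacy synchronized collection and its unsynchronized replacement.
public class LegacySync {
    static int fill(List<Integer> list, int n) {
        for (int i = 0; i < n; i++) list.add(i);   // Vector: one monitor enter/exit per add
        return list.size();
    }

    public static void main(String[] args) {
        System.out.println(fill(new Vector<>(), 1_000));    // synchronized per call
        System.out.println(fill(new ArrayList<>(), 1_000)); // no synchronization
    }
}
```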
Two papers here:
https://cdn.app.compendium.com/uploads/user/e7c690e8-6ff9-102a-ac6d-e4aebca50425/f4a5b21d-66fa-4885-92bf-c4e81c06d916/File/ccd39237cd4dc109d91786762fba41f0/qrl_oplocks_biasedlocking.pdf
https://www.oracle.com/technetwork/java/biasedlocking-oopsla2006-wp-149958.pdf
A web page, too:
https://blogs.oracle.com/dave/biased-locking-in-hotspot
jvm hotspot source code:
http://hg.openjdk.java.net/jdk8/jdk8/hotspot/file/87ee5ee27509/src/share/vm/oops/markOop.hpp
How do the Boost Thread libraries compare against the java.util.concurrent libraries?
Performance is critical and so I would prefer to stay with C++ (although Java is a lot faster these days). Given that I have to code in C++, what libraries exist to make threading easy and less error prone.
I have heard recently that as of JDK 1.5, the Java memory model was changed to fix some concurrency issues. How about C++? The last time I did multithreaded programming in C++ was 3-4 years ago when I used pthreads. Although, I don't wish to use that anymore for a large project. The only other alternative that I know of is Boost Threads. However, I am not sure if it is good. I've heard good things about java.util.concurrent, but nothing yet about Boost threads.
java.util.concurrent and the Boost threads library have overlapping functionality, but java.util.concurrent also provides (a) higher-level abstractions and (b) lower-level functions.
Boost threads provide:
Thread (Java: java.util.Thread)
Locking (Java: java.lang.Object and java.util.concurrent.locks)
Condition Variables (Java: java.lang.Object and java.util.concurrent)
Barrier (Java: CyclicBarrier)
java.util.concurrent also has:
Semaphores
Reader-writer locks
Concurrent data structures, e.g. a BlockingQueue or a concurrent lock-free hash map.
Executor services, a highly flexible producer-consumer framework.
Atomic operations.
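As a taste of those higher-level building blocks, here is a minimal producer-consumer sketch (names are mine) using a bounded BlockingQueue, with no explicit locks or condition variables in user code:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// One producer, one consumer, wired through a bounded blocking queue.
public class ProducerConsumer {
    static int consumeAll(int n) throws InterruptedException {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(16);
        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < n; i++) queue.put(i); // blocks when the queue is full
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();
        int sum = 0;
        for (int i = 0; i < n; i++) sum += queue.take();  // blocks when the queue is empty
        producer.join();
        return sum;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("sum = " + consumeAll(100));   // 0 + 1 + ... + 99 = 4950
    }
}
```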
A side note: C++ currently has no memory model. On a different machine, the same C++ application may have to deal with a different memory model. This makes portable concurrent programming in C++ even trickier.
Boost threads are a lot easier to use than pthreads, and, in my opinion, slightly easier to use than Java threads. When a boost thread object is instantiated, it launches a new thread. The user supplies a function or function object which will run in that new thread.
It's really as simple as:
boost::thread thr((MyFunc())); // extra parentheses avoid the "most vexing parse"
thr.join();
You can easily pass data to the thread by storing values inside the function object. And in the latest version of boost, you can pass a variable number of arguments to the thread constructor itself, which will then be passed to your function object's () operator.
You can also use RAII-style locks with boost::mutex for synchronization.
Note that C++0x will use the same syntax for std::thread.
Performance-wise I wouldn't really worry. It is my gut feeling that a Boost/C++ expert could write faster code than a Java expert, but any advantage would have to be fought for.
I prefer Boost's design paradigms to Java's. Java is OO all the way, where Boost/C++ allows for OO if you like but uses the most useful paradigm for the problem at hand. In particular I love RAII when dealing with locks. Java handles memory management beautifully, but sometimes it feels like the rest of the programmers' resources get shafted: file handles, mutexes, DB, sockets, etc.
Java's concurrent library is more extensive than Boost's. Thread pools, concurrent containers, atomics, etc. But the core primitives are on par with each other, threads, mutexes, condition variables.
So for performance I'd say it's a wash. If you need lots of high level concurrent library support Java wins. If you prefer paradigm freedom C++.
If performance is an issue in your multithreaded program, then you should consider a lock-free design.
Lock-free means threads do not compete for a shared resource and that minimizes switching costs. In that department, Java has a better story IMHO, with its concurrent collections. You can rapidly come up with a lock-free solution.
For having used the Boost Thread lib a bit (but not extensively), I can say that your thinking will be influenced by what's available, and that means essentially a locking solution.
Writing a lock-free C++ solution is very difficult, because of the lack of library support and also conceptually because C++ is missing a memory model that guarantees you can write truly immutable objects.
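As an example of how quickly the concurrent collections get you to a working solution, here is a small sketch (names are mine) of a shared word-count map: ConcurrentHashMap.merge performs an atomic read-modify-write per key, so several threads can update it without any explicit locking in user code:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Several threads increment per-word counters in a shared concurrent map.
public class WordCount {
    static int countConcurrently(String[] words, int threads) throws InterruptedException {
        Map<String, Integer> counts = new ConcurrentHashMap<>();
        Thread[] pool = new Thread[threads];
        for (int t = 0; t < threads; t++) {
            pool[t] = new Thread(() -> {
                for (String w : words)
                    counts.merge(w, 1, Integer::sum);  // atomic per-key update
            });
            pool[t].start();
        }
        for (Thread t : pool) t.join();
        return counts.get(words[0]);
    }

    public static void main(String[] args) throws InterruptedException {
        String[] words = {"a", "b", "a"};
        // "a" appears twice per thread, counted by 4 threads: 2 * 4 = 8
        System.out.println(countConcurrently(words, 4));
    }
}
```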
this book is a must read: Java Concurrency in Practice
If you're targeting a specific platform then the direct OS call will probably be a little faster than using boost for C++. I would tend to use ACE, since you can generally make the right calls for your main platform and it will still be platform-independent. Java should be about the same speed so long as you can guarantee that it will be running on a recent version.
In C++ one can directly use pthreads (pthread_create() etc) if one wanted to. Internally Java uses pthreads via its run-time environment. Do "ldd " to see.