Java: Using ConcurrentHashMap as a lock manager

I'm writing a highly concurrent application that needs access to a large, fine-grained set of shared resources. I'm currently writing a global lock manager to organize this. I'm wondering if I can piggyback off the standard ConcurrentHashMap and use that to handle the locking? I'm thinking of a system like the following:
A single global ConcurrentHashMap object contains a mapping between the unique string id of the resource and the unique id of the thread using it, so each entry acts as the lock for that resource
Tune the concurrency factor to reflect the need for a high level of concurrency
Locks are acquired using the atomic conditional replace(K key, V oldValue, V newValue) method in the hashmap
To prevent deadlock when locking multiple resources, locks must be acquired in alphabetical order
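As a concrete illustration, here is a minimal sketch of that scheme (class and method names are mine, not from the question). It uses putIfAbsent/remove(key, value) for the atomic claim and release; the conditional replace mentioned above works the same way if you keep an explicit "unlocked" marker value:

```java
import java.util.concurrent.ConcurrentHashMap;

public class MapLockManager {
    // resource id -> id of the thread currently holding that resource
    private final ConcurrentHashMap<String, Long> locks =
            new ConcurrentHashMap<>();

    /** Atomically claim the resource; true if this thread now owns it. */
    public boolean tryLock(String resourceId) {
        return locks.putIfAbsent(resourceId, Thread.currentThread().getId()) == null;
    }

    /** Release the resource, but only if the calling thread owns it. */
    public void unlock(String resourceId) {
        locks.remove(resourceId, Thread.currentThread().getId());
    }
}
```

Note that a failed tryLock simply returns false; deciding how to wait is left to the caller, which is exactly the gap the answer below points out.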
Are there any major issues with the setup? How will the performance be?
I know this is probably going to be much slower and more memory-heavy than a properly written locking system, but I'd rather not spend days trying to write my own, especially given that I probably won't be able to match Java's professionally-written concurrency code implementing the map.
Also, I've never used ConcurrentHashMap in a high-load situation, so I'm interested in the following:
How well will this scale to large numbers of elements? (I'm looking at ~1,000,000 being a good cap. If I reach beyond that I'd be willing to rewrite this more efficiently)
The documentation states that re-sizing is "relatively" slow. Just how slow is it? I'll probably have to re-size the map once every minute or so. Is this going to be problematic with the size of map I'm looking at?
Edit: Thanks Holger for pointing out that HashMaps shouldn't have that big of an issue with scaling
Also, is there a better/more standard method out there? I can't find any places where a system like this is used, so I'm guessing that either I'm not seeing a major flaw, or there's something else.
Edit:
The application I'm writing is a network service, handling a variable number of requests. I'm using the Grizzly project to balance the requests among multiple threads.
Each request uses a small number of the shared resources (~30), so in general, I do not expect a great deal of contention. The requests usually finish working with the resources in under 500ms. Thus, I'd be fine with a bit of blocking/continuous polling, as the requests aren't extremely time-sensitive and contention should be minimal.
In general, seeing that a proper solution would be quite similar to how ConcurrentHashMap works behind the scenes, I'm wondering if I can safely use that as a shortcut instead of writing/debugging/testing my own version.

The re-sizing issue is not relevant, as you already gave an estimate of the number of elements in your question. So you can give a ConcurrentHashMap an initial capacity large enough to avoid any rehashing.
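For example (a hedged sketch; the sizing math assumes the default 0.75 load factor):

```java
import java.util.concurrent.ConcurrentHashMap;

public class PreSized {
    // ~1,000,000 expected entries divided by the 0.75 load factor, so
    // the table is big enough from the start and never needs resizing.
    static final ConcurrentHashMap<String, Long> LOCKS =
            new ConcurrentHashMap<>((int) (1_000_000 / 0.75f) + 1);
}
```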
The performance will not depend on the number of elements (that is the main goal of hashing), but on the number of concurrent threads.
The main problem is that you don't have a plan for how to handle failed locks. Unless you want to poll until locking succeeds (which is not recommended), you need a way of putting a thread to sleep, which implies that the thread currently owning the lock has to wake a sleeping thread on release, if one exists. So you end up requiring conventional Lock features that a ConcurrentHashMap does not offer.
Creating a Lock per element (as you said ~1,000,000) would not be a solution.
A solution would look a bit like how ConcurrentHashMap works internally. Given a certain concurrency level, i.e. the number of threads you might have (rounded up), you create that number of Locks (which would be a far smaller number than 1,000,000).
Now you assign each element one of the Locks. A simple assignment would be based on the element's hashCode, assuming it is stable. Then locking an element means locking the assigned Lock, which gives you concurrency up to the configured level as long as the currently locked elements map to different Locks.
This might imply that threads locking different elements block each other if the elements are mapped to the same Lock, but with a predictable likelihood. You can try fine-tuning the concurrency level (as said, use a number higher than the number of threads) to find the best trade-off.
A big advantage of this approach is that you do not need to maintain a data structure that depends on the number of elements. Afaik, the new parallel ClassLoader uses a similar technique.
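A hedged sketch of that striping idea (all names are mine):

```java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class StripedLocks {
    private final Lock[] stripes;

    public StripedLocks(int concurrencyLevel) {
        int size = 1;                       // round up to a power of two
        while (size < concurrencyLevel) size <<= 1;
        stripes = new Lock[size];
        for (int i = 0; i < size; i++) stripes[i] = new ReentrantLock();
    }

    /** Every element with the same (stable) hashCode maps to one stripe. */
    public Lock lockFor(Object element) {
        int h = element.hashCode();
        h ^= (h >>> 16);                    // spread bits, as HashMap does
        return stripes[h & (stripes.length - 1)];
    }
}
```

Locking an element is then lockFor(element).lock() in a try/finally; two elements only block each other when they collide on the same stripe.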

Related

Improving performance with Distributed Counter, looking for library

We have a system with many threads, each incrementing the same counter. At the end, we need the total number of increments across all threads. Due to the size of the final result and the cost of synchronization, we suspect a performance issue with our current solution, which uses synchronized access to a single variable.
To avoid synchronization, I would like to use a Distributed Counter (correct term?), where each thread increments its own counter copy. The individual counters are summed up only once, when the final result is fetched.
I could implement such a counter from scratch, but I guess I'm not the first one with such a requirement. Surprisingly, a quick search did not turn up any library. Could you suggest some library or demo code? I'm looking for a simple solution, not a heavy framework.
Does your system have many different processes managing all the different threads?
If all threads are managed by the same process, I don't think you need a distributed resource (counter); you can just use an AtomicInteger, as suggested.
Atomic means that it is thread-safe: it can be accessed from many threads and no data corruption will happen.
If your system does use many processes, then you will need a distributed resource. You can use any type of database to achieve that; Redis seems to me like a good option, or any MySQL database if you want strong consistency.
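A minimal single-process illustration with AtomicInteger (a sketch, not from the question):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class SharedCounter {
    static final AtomicInteger counter = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        Thread[] workers = new Thread[4];
        for (int i = 0; i < workers.length; i++) {
            workers[i] = new Thread(() -> {
                for (int j = 0; j < 100_000; j++) counter.incrementAndGet();
            });
            workers[i].start();
        }
        for (Thread t : workers) t.join();
        System.out.println(counter.get()); // always 400000, no lost updates
    }
}
```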
The solution you propose yourself is a CRDT counter. Perhaps searching for that keyword lets you find a suitable implementation.
If it is within one JVM process, just read thread-local counters to sum them up.
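On Java 8+, java.util.concurrent.atomic.LongAdder already implements exactly this idea (striped per-thread cells summed on read), so no custom library is needed; a minimal sketch:

```java
import java.util.concurrent.atomic.LongAdder;

public class DistributedCounterDemo {
    static final LongAdder counter = new LongAdder();

    public static void main(String[] args) throws InterruptedException {
        Thread[] workers = new Thread[8];
        for (int i = 0; i < workers.length; i++) {
            workers[i] = new Thread(() -> {
                for (int j = 0; j < 1_000_000; j++) counter.increment();
            });
            workers[i].start();
        }
        for (Thread t : workers) t.join();
        // sum() folds the striped per-thread cells into the final result
        System.out.println(counter.sum()); // 8000000
    }
}
```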
If it is inter-process, memory-mapped files are great for performance; only the file-level (or buffer-level) I/O API is fiddly when it comes to reading and writing.

Under what circumstances do you need to synchronize an array in Java?

Under what circumstances do you need to synchronize an array?
My thoughts are, do you need to synchronize for access? Say two threads access the array at the same time, is that going to crash?
What if one edits, while one is reading? (separate values, and the same in different circumstances)
Both editing different things?
Or is there no JVM crash at all for arrays when you don't synchronize?
Under what circumstances do you need to synchronize an array?
It's sort of: you either always need to, or you never need to. As @EJP said, he's never done it, because there's almost always a better data structure than a bare array anyway (edit: there are lots of good use cases for arrays, but they're almost always used in isolation, e.g. inside ArrayList). But if you insist on sharing arrays between threads: array elements aren't volatile, so because of possible caching you'll get inconsistencies and corrupt data without using synchronized.
My thoughts are, do you need to synchronize for access? Say two threads access the array at the same time, is that going to crash?
Crash, no, but your data could be inconsistent, and extra inconsistent if the elements are 64-bit longs or doubles on a 32-bit architecture, where a single read or write is not even guaranteed to be atomic.
What if one edits, while one is reading? (separate values, and the same in different circumstances)
Please don't. Wrapping your head around the Java memory model is hard enough. If you haven't established that a read or a write happened-before another read or write, the ultimate sequencing is undefined.
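To make the happens-before point concrete, here is a hedged sketch: with a plain int[] the spinning reader has no guarantee of ever seeing the write, while AtomicIntegerArray provides volatile semantics per element:

```java
import java.util.concurrent.atomic.AtomicIntegerArray;

public class ArrayVisibility {
    // With a plain int[] the spinning reader may never observe the
    // write; AtomicIntegerArray gives each element volatile semantics.
    static final AtomicIntegerArray flags = new AtomicIntegerArray(1);

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (flags.get(0) == 0) { /* spin until the write is visible */ }
            System.out.println("saw the write");
        });
        reader.start();
        Thread.sleep(100);
        flags.set(0, 1); // happens-before the reader's get(0) that sees it
        reader.join();
    }
}
```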
This is a difficult question because it touches on a lot of Concurrency topics.
First I'd start with, http://docs.oracle.com/javase/tutorial/essential/concurrency/sync.html
Threads communicate primarily by sharing access to fields and the objects reference fields refer to. This form of communication is extremely efficient, but makes two kinds of errors possible: thread interference and memory consistency errors. The tool needed to prevent these errors is synchronization.
A. Thread Interference describes how errors are introduced when multiple threads access shared data.
B. Memory Consistency Errors describes errors that result from inconsistent views of shared memory.
So, to answer the main question directly: you synchronize an array when you believe it may be accessed in a way that introduces Thread Interference or Memory Consistency Errors.
You end up with what's called a Race Condition. Whether that crashes your application or not depends on your application.
So if you do not synchronize access to an array that is shared between multiple threads, you run the chance of threads interleaving modifications to the array (i.e. Thread Interference), or of threads reading inconsistent data from it (i.e. Memory Consistency Errors).
The solution is typically to synchronize access to the array, or to use a Collection built for concurrency, such as those described at https://docs.oracle.com/javase/tutorial/essential/concurrency/collections.html
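For instance, a minimal wrapper (names are mine) that synchronizes all access to a shared array:

```java
public class SharedArray {
    private final int[] data = new int[16];

    // Every read and write goes through the same lock, so each access
    // happens-before the next: no interference, no stale values.
    public synchronized void set(int index, int value) { data[index] = value; }
    public synchronized int get(int index) { return data[index]; }
}
```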

When is the wrong time to use a Collections.synchronizedList vs. a List?

Other than a (minor?) performance hit, why would I use a regular List instead of a Collections.synchronizedList?
The project I'm working on is under 10k entries, so I don't care, but if someone (maybe me) chooses to sub-class this, I need to document the behavior.
Besides performance (over 100k entries), why would I not synchronize?
That is, what penalty do I incur for using a synchronizedList? How bad is it? For my current application, it's not an issue. If it is a cheap addition, why not?
Other than a (minor?) performance hit ...
In fact, if the list is shared between threads, the cost of using a simple synchronized list (versus something more appropriate) could be large, depending on what you are doing. The synchronized operations could become a concurrency bottleneck, reducing the application to the performance of a single core.
Simple "black and white" rules are not sufficient when designing a multi-threaded application ... or a reusable library that needs to be thread-safe and also performant in multi-threaded applications.
That is, what penalty do I incur for using a synchronizedList? How bad is it? For my current application, it's not an issue. If it is a cheap addition, why not?
The synchronized list class uses primitive object locking (mutexes).
If the lock is uncontended, this is cheap; maybe 5 or 10 instructions each time you acquire and release the lock. However, the overhead may depend on whether there was previous contention on the lock. (Some locking schemes cause an object lock to be "inflated" the first time that contention occurs ...)
If the lock is contended, then it is more expensive because this will typically involve the blocked thread being de-scheduled and rescheduled ... and context switch overheads. There is another JVM-level implementation approach involving "spin locking", but that entails the blocked thread testing the lock object in a tight loop.
If the lock is held for a long time (e.g. in list.contains(...) on a long list), that typically increases the probability of contention.
When you don't need the synchronization, or when you're deluding yourself that a synchronized list is thread-safe even when iterating, which it isn't.
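The iteration caveat is in the Collections.synchronizedList Javadoc itself: individual calls are synchronized for you, but a compound action such as iteration requires you to hold the list's lock manually. A minimal sketch:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class IterationDemo {
    public static void main(String[] args) {
        List<String> list = Collections.synchronizedList(new ArrayList<>());
        list.add("a"); // single calls like add/get/size are synchronized

        // Iteration is a compound action: without this block another
        // thread could modify the list mid-iteration.
        synchronized (list) {
            for (String s : list) {
                System.out.println(s);
            }
        }
    }
}
```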

Why does sharing a static variable between threads reduce performance?

I asked a question here and someone left a comment saying that the problem is that I'm sharing a static variable.
Why is that a problem?
Sharing a static variable in and of itself should have no adverse effect on performance. Global data is common in all programs, starting with the JVM and OS constructs.
Mutable shared data is a different story as the mutation of shared data can lead to both performance issues (cache misses at the very least) and correctness issues which are a pain and are often solved using locks, which lead to potentially other performance issues.
The wiki static variable looks like a pretty substantial part of your program. Not knowing anything about what it's doing or how it's coded, I would guess that it does locking in order to keep a consistent state. If most of your threads spend their time blocked waiting to acquire access to this same object, that would explain why you're not seeing any gain from using multiple threads.
For threads to make a difference to the performance of your program they have to be reasonably independent, and not all locking on the same thing. The more locking they have to do, the less gain you will see. So try to split out the work so as much can be done independently as possible. For instance if there are work items that can be gathered independently, then you might be better off by having multiple threads go find the work items, then feed them to a queue that a dedicated thread can use to pull work items off the queue and feed them to the wiki object.
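A hedged sketch of that split (the wiki object and work items are hypothetical):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class WorkFeeder {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();
        ExecutorService finders = Executors.newFixedThreadPool(4);

        // Several threads gather work items independently, no shared lock.
        for (int i = 0; i < 4; i++) {
            final int id = i;
            finders.submit(() -> queue.add("item-" + id));
        }

        // One dedicated thread applies the items to the shared object,
        // so only it ever touches (and locks) the wiki.
        Thread consumer = new Thread(() -> {
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    String item = queue.take();
                    System.out.println("applying " + item); // wiki.apply(item)
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();

        finders.shutdown();
        finders.awaitTermination(1, TimeUnit.SECONDS);
        Thread.sleep(200); // demo only: let the consumer drain the queue
        consumer.interrupt();
        consumer.join();
    }
}
```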

Terracotta Performance and Tips

I am just learning how to use Terracotta after discovering it about a month ago. It is a very cool technology.
Basically what I am trying to do:
My root (System of Record) is a ConcurrentHashMap.
The main Instrumented Class is a "JavaBean" with 30 or so fields that I want to exist in the HashMap.
There will be about 20,000 of these JavaBeans in the HashMap.
Each bean has (at least) 5 fields that will be updated every 5 seconds.
(The reason I am using Terracotta for this is because these JavaBeans need to be accessible across JVMs and nodes.)
Anyone with more experience than me with TC have any tips? Performance is key.
Any examples of other similar applications?
You might find that batching several changes under one lock scope will perform better. Each synchronized block/method forms a write transaction (assuming you use a write lock) that must be sent to the server (and possibly back out to other nodes). By changing a bunch of fields, possibly on a bunch of objects under one lock, you reduce the overhead of creating a transaction. Something to play with at least.
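A hedged illustration (the bean and its fields are hypothetical): batching the five updates under one lock scope turns five write transactions into one:

```java
// Hypothetical clustered bean: under Terracotta, each synchronized
// block or method on a shared object forms one write transaction.
public class QuoteBean {
    private double price, bid, ask, volume;
    private long lastUpdate;

    // One lock scope -> one transaction covering all five field changes,
    // instead of five transactions for five separately locked setters.
    public synchronized void updateAll(double price, double bid, double ask,
                                       double volume, long lastUpdate) {
        this.price = price;
        this.bid = bid;
        this.ask = ask;
        this.volume = volume;
        this.lastUpdate = lastUpdate;
    }
}
```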
Partitioning is also a key way to improve performance. Changes only need to be sent to nodes that are actually using an object. So if you can partition which nodes usually touch specific objects that reduces the number of changes that have to be sent around the cluster, which improves performance.
unnutz's suggestions about using CHM or CSM are good ones. CHM allows greater concurrency (each internal segment can be locked and used concurrently); make sure to experiment with larger segment counts too. CSM has one lock per entry, so it effectively has N partitions in an N-sized table. That can greatly reduce lock contention (at the cost of managing more internal lock objects). Changes coming soon for CSM will make the lock management cost much lower.
Generally we find a good strategy is:
Build a performance test (it should be multi-threaded, multi-node, and similar to your app, or be your actual app!)
Tune objects - look at your clustered object graph in the dev-console to find objects that don't need to be clustered at all; sometimes this happens accidentally (remove them, or cut the cluster with a transient field). Sometimes you might be clustering a Date where a long would do. A small change, but that's one object per map entry, and that might make a difference.
Tune locks - use the lock profiler in the dev-console to find hot locks or locks that are too narrow or too wide. The clustered stats recorder can help look at transaction size as well.
Tune GC and DGC - tune JVM garbage collection, then tune the Terracotta distributed GC by changing the frequency of young-gen GC.
Tune TC server - lots of very detailed tunings to do here, but usually not worth it till the stuff above is tuned.
Feel free to ask on the Terracotta forums as well - all of engineering, field engineering, product mgmt watch those and answer there.
Firstly, I would suggest raising this question on their forums too.
Secondly, performance of your application clustered over Terracotta will depend on the number of write transactions that happen. So you could consider using ConcurrentStringMap (if your keys are Strings) or ConcurrentHashMap. Note that CSM is much better than CHM from a performance point of view.
Also note that POJOs are loaded lazily: each property is loaded on demand.
Hope that helps.
Cheers
