Akka actor message needs memory pool - java

I'm new to Java. I'm a C++ programmer and have been studying Java for two months now.
Sorry for my poor English.
My question is whether the Akka actor model needs a memory pool or object pool. As I understand it, if I send a message from one actor to another, I have to allocate some heap memory (a new String, a new BigInteger, and so on). As time goes on, the garbage collector will kick in (I'm not sure exactly when), and that will make my application compute slowly.
So I searched for a way to build a memory pool and failed (Java does not support memory pools). I could build an object pool, but in other projects I did not find anybody using an object pool with actors (nor on the Akka homepage).
Is there any documentation about this topic on the Akka homepage? Please give me the link, or tell me the solution to my question.
Thanks.

If, as is likely, you are using Akka across multiple machines, messages are serialized on the wire and sent to the other instance. This means that a purely local memory pool won't suffice.
While it's technically possible to write a custom JSerializer implementation (see the doc here) that stores local messages in a memory pool after deserializing them, that feels like overkill for most applications (and it's easy to get wrong and actually worsen performance through map lookup times).
Yes, when the GC kicks in, the app will lag a bit under heavy load. But in 95% of scenarios, especially under a performant framework like Akka, GC will not be your bottleneck: IO will.
I'm not saying you shouldn't do it. I'm saying that before you take on the task, given its non-triviality, you should measure the impact of GC on your app at runtime with tools like Kamon or other Akka-specialized monitoring solutions, and go for it only after you are sure it's worth it.

Using an ArrayBlockingQueue to hold a pool of your objects should help.
Here is some example code.
To create a pool and insert an instance of a pooled object into it:
BlockingQueue<YOURCLASS> queue = new ArrayBlockingQueue<>(256); // Adjust 256 to your desired count; an ArrayBlockingQueue's size cannot be changed once it is initialized.
queue.put(YOUROBJ); // This goes in the code that instantiates the pool.
And later, where you need it (in the actor that receives the message):
YOURCLASS instanceName = queue.take();
You might have to write some code around this to create and manage the pool, but this is the gist of it.
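To flesh that out, here is a minimal generic pool sketch built on the same idea (the class and method names are illustrative, not from any library):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.function.Supplier;

public final class SimplePool<T> {
    private final BlockingQueue<T> queue;

    public SimplePool(int capacity, Supplier<T> factory) {
        queue = new ArrayBlockingQueue<>(capacity);
        for (int i = 0; i < capacity; i++) {
            queue.add(factory.get()); // pre-fill the pool up front
        }
    }

    public T acquire() throws InterruptedException {
        return queue.take(); // blocks until an instance is free
    }

    public void release(T obj) {
        queue.offer(obj); // hand the instance back for reuse
    }
}

Usage would look like new SimplePool<>(256, StringBuilder::new), with acquire()/release() around each use. Note that anything you pool must be reset to a clean state before release.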

One can do object pooling to minimise the long tail of latency (at the sacrifice of the median in a multithreaded environment). Consider using appropriate queues, e.g. from JCTools, Disruptor, or Agrona. Don't forget the rules of engagement for state exchange via mutable state across multiple threads in the stored objects - https://youtu.be/nhYIEqt-jvY (the best content I was able to find).
Again, don't expect to improve throughput using such slightly dangerous techniques. You will lose L1-L3 cache efficiency and will pollute the interconnect with memory barriers.
A bit of a tangent (to get a sense of low-latency technology):
One may consider a GC implementation with lower latency if you want to stick with Akka, or use a custom reactive model where the object pool is used by a single thread, or where memory is copied over, e.g. the Disruptor's approach.
Another alternative is using memory regions (the way the Erlang VM works). It creates garbage, but in a form that is easy for the GC to handle!
If you go for very-low-latency IO and latency is your biggest enemy - forget legacy TCP (vs. RDMA over InfiniBand), switches (vs. switchless), accessing disk via OS calls and a file system (use RDMA), forget interrupts shared with your core, non-pinned cores (pin to a real CPU core, not a hyperthread, and spin for input), inter-NUMA communication, and messages delivered one by one instead of hardware multicast (or better, an optical switch) for multiple consumers - and don't forget to turn on the Epsilon GC for the JVM ;)
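For reference, Epsilon (the JVM's no-op collector, JEP 318) is enabled with experimental flags, e.g.:

java -XX:+UnlockExperimentalVMOptions -XX:+UseEpsilonGC -Xmx16g MyApp

It never reclaims memory, so the heap must be sized for everything the process will ever allocate - which is exactly why it pairs with pooling and region-style allocation.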

Related

Scheduling memory-bound tasks in java

Suppose I have a large batch of memory-bound tasks that are quite independent of one another. To make things concrete, let's say I can allocate 30GB for the heap and that each task requires on average about 3GB of memory at its peak, but with some variability both over time and from task to task. A few tasks here and there might even require 6GB.
In this case, it seems more efficient to try to run 10 (or arguably even more) tasks concurrently, and if/when we bump into the memory limit, have the task wait, much the same as we do with other shared resources like I/O, specific memory addresses (which are accessed through locks), etc.
Is it possible to do this in Java? More generally:
What's the best way to handle memory-bound task scheduling in Java?
Some Related Questions and "Close Misses"
This question asks whether it's possible to have threads in Java wait for memory instead of throwing an OOM exception, but the answers seem to focus on why this is a bad idea to begin with - perhaps because the question suggests an unreasonable number of threads. Also, I guess treating all memory requests as equal can lead to deadlocks. So I want to emphasize that here we are talking about only around 10 tasks, and the desire to "max out" memory usage seems like a very natural one. I don't mind wrapping my tasks in some suitable logic that marks their memory requests as lower priority. I can even accept a solution where I need to identify the class whose instances are filling up the memory and perhaps add some suitable counter - but I'd prefer a platform-independent solution that works "out of the box", if there is one.
This question also asks about scheduling memory-bound tasks but seems to presuppose a specific solution framework.
The problem is that within a single JVM you have very little control over how much memory a single thread is going to use, unless you make use of off-heap memory (e.g. using Unsafe or direct memory, as AnatolyG already mentioned). If you have huge array allocations, you could also control those. But we would need to know more about the data structures that consume the most memory.
But if you have arbitrary object graphs you don't have much control over, perhaps it's smarter to model the problem using multiple processes: one intake/controller process and a bunch of worker processes, where each worker JVM is configured with the maximum amount of heap it is allowed to use.
Bumping into memory limits at the OS level can be a huge PITA, because it can lead to swapping, which makes all the threads in the system slow - or worse, triggers the OOM killer. Make sure you set vm.swappiness to a very low value to prevent premature swapping.
Do you know up front how much memory a process is going to consume? If so, you could keep track of the maximum amount of memory being consumed in the system and not admit new tasks before existing tasks have completed.
If you don't know the memory limits up front, you could assume each task will use the maximum, but this can lead to under-utilization of memory.
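As a concrete illustration of that admission-control idea, here is a hedged sketch that treats the heap budget as semaphore permits (one permit per GB); every name here is invented for the example:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;

public final class MemoryBoundScheduler {
    private final Semaphore heapPermits;    // 1 permit == 1 GB of heap budget
    private final ExecutorService executor;

    public MemoryBoundScheduler(int heapBudgetGb, int maxThreads) {
        heapPermits = new Semaphore(heapBudgetGb);
        executor = Executors.newFixedThreadPool(maxThreads);
    }

    // The caller declares the task's estimated peak heap usage in GB.
    public void submit(int estimatedGb, Runnable task) {
        executor.execute(() -> {
            try {
                heapPermits.acquire(estimatedGb); // wait until budget frees up
                try {
                    task.run();
                } finally {
                    heapPermits.release(estimatedGb);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
    }
}

With a 30 GB budget, ten 3 GB tasks run at once, and a 6 GB task simply waits for permits; note that estimates that are too low can still cause an OOM, which is the limit of any in-JVM scheme like this.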

What does Thread Affinity mean?

Somewhere I have heard about Thread Affinity and the Thread Affinity Executor, but I cannot find a proper reference for it, at least in Java. Can someone please explain to me what it is all about?
There are two issues. First, it’s preferable that threads have an affinity to a certain CPU (core) to make the most of their CPU-local caches. This must be handled by the operating system. This CPU affinity for threads is often also called “thread affinity”. In case of Java, there is no standard API to get control over this. But there are 3rd party libraries, as mentioned by other answers.
Second, in Java there is the observation that in typical programs objects are thread-affine, i.e. typically used by only one thread most of the time. So it's the task of the JVM's optimizer to ensure that objects affine to one thread are placed close to each other in memory, so that they fit into one CPU's cache, but to place objects affine to different threads not too close to each other, to avoid them sharing a cache line; otherwise two CPUs/cores would have to synchronize them too often.
The ideal situation is that a CPU can work on some objects independently to another CPU working on other objects placed in an unrelated memory region.
Practical examples of optimizations considering Thread Affinity of Java objects are
Thread-Local Allocation Buffers (TLABs)
With TLABs, each object starts its lifetime in a memory region dedicated to the thread that created it. According to the main hypothesis behind generational garbage collectors ("the majority of all objects will die young"), most objects will spend their entire lifetime in such a thread-local buffer.
Biased Locking
With Biased Locking, JVMs will perform locking operations with the optimistic assumption that the object will be locked by the same thread only, switching to a more expensive locking implementation only when this assumption does not hold.
@Contended
To address the other end - fields that are known to be accessed by multiple threads - HotSpot/OpenJDK has an annotation, currently not part of a public API, to mark them, directing the JVM to move this data away from other, potentially unshared data.
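A minimal sketch of what using it looks like (assuming JDK 9+, where the annotation lives in the internal package jdk.internal.vm.annotation; it was sun.misc.Contended in JDK 8, and it only takes effect for application classes when run with -XX:-RestrictContended):

import jdk.internal.vm.annotation.Contended;

public class Counters {
    @Contended                  // pad this field onto its own cache line
    volatile long hotCounter;   // written by many threads

    volatile long coldState;    // rarely touched; fine to share a line
}

Compiling against the internal package also requires --add-exports java.base/jdk.internal.vm.annotation=ALL-UNNAMED.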
Let me try explaining it. With the rise of multicore processors, message passing between threads, and thread pooling, scheduling has become a costlier affair. To understand why it has become much heavier than before, we need the concept of "mechanical sympathy" (for details you can read a blog post on it). In crude terms, when threads are distributed across different cores of a processor and try to exchange messages, the cache-miss probability is high. Now, coming to your specific question: thread affinity means being able to assign specific threads to a particular processor/core. Here is one of the libraries for Java that can be used for it.
The Java Thread Affinity version 1.4 library attempts to get the best of both worlds by allowing you to reserve a logical thread for critical threads, and reserve a whole core for the most performance-sensitive threads. Less critical threads will still run with the benefits of hyper-threading. E.g. the following code snippet:
AffinityLock al = AffinityLock.acquireLock();
try {
    // find a cpu on a different socket, otherwise a different core.
    AffinityLock readerLock = al.acquireLock(DIFFERENT_SOCKET, DIFFERENT_CORE);
    new Thread(new SleepRunnable(readerLock, false), "reader").start();

    // find a cpu on the same core, or the same socket, or any free cpu.
    AffinityLock writerLock = readerLock.acquireLock(SAME_CORE, SAME_SOCKET, ANY);
    new Thread(new SleepRunnable(writerLock, false), "writer").start();

    Thread.sleep(200);
} finally {
    al.release();
}

// allocate a whole core to the engine so it doesn't have to compete for resources.
al = AffinityLock.acquireCore(false);
new Thread(new SleepRunnable(al, true), "engine").start();

Thread.sleep(200);
System.out.println("\nThe assignment of CPUs is\n" + AffinityLock.dumpLocks());
Thread affinity (or process affinity) describes which processor cores a thread/process is allowed to run on. Normally this set includes all (logical) CPUs in your system, and there's hardly a reason to change it, because the operating system then has the best opportunities to schedule your tasks among the available processors.
See e.g. http://msdn.microsoft.com/en-us/library/windows/desktop/ms683213(v=vs.85).aspx for how this works on Windows. I don't know whether Java offers an API to set these.

Java allocation : allocating objects from a pre-existing/allocated pool

In a Java program when it is necessary to allocate thousands of similar-size objects, it would be better (in my mind) to have a "pool" (which is a single allocation) with reserved items that can be pulled from when needed. This single large allocation wouldn't fragment the heap as much as thousands of smaller allocations.
Obviously, there isn't a way to specifically point an object reference to an address in memory (for its member fields) to set up a pool. Even if the new object referenced an area of the pool, the object itself would still need to be allocated. How would you handle many allocations like this without resorting to native OS libraries?
You could try using the Commons Pool library.
That said, unless I had proof the JVM wasn't doing what I needed, I'd probably hold off on optimizing object creation.
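Since the original answer names the library but shows no code, here is a hedged sketch of Commons Pool 2 usage (pooling StringBuilder instances purely as a stand-in for your own expensive class):

import org.apache.commons.pool2.BasePooledObjectFactory;
import org.apache.commons.pool2.PooledObject;
import org.apache.commons.pool2.impl.DefaultPooledObject;
import org.apache.commons.pool2.impl.GenericObjectPool;

public class PoolDemo {
    // Factory telling the pool how to create, wrap, and reset instances.
    static class BufferFactory extends BasePooledObjectFactory<StringBuilder> {
        @Override
        public StringBuilder create() {
            return new StringBuilder(1024);
        }

        @Override
        public PooledObject<StringBuilder> wrap(StringBuilder sb) {
            return new DefaultPooledObject<>(sb);
        }

        @Override
        public void passivateObject(PooledObject<StringBuilder> p) {
            p.getObject().setLength(0); // reset state before reuse
        }
    }

    public static void main(String[] args) throws Exception {
        GenericObjectPool<StringBuilder> pool =
                new GenericObjectPool<>(new BufferFactory());
        StringBuilder sb = pool.borrowObject(); // take an instance
        try {
            sb.append("hello");
        } finally {
            pool.returnObject(sb);              // always hand it back
        }
        pool.close();
    }
}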
Don't worry about it. Unless you have done a lot of testing and analysis on the actual code being run and know that it is a problem with garbage collection and that the JVM isn't doing a good enough job, spend your time elsewhere.
If you are building an application where predictable response time is very important, then pooling objects, no matter how small they are, will pay dividends. Then again, how well pooling works is also a function of how big a data set you are trying to pool and how much physical memory your machine has.
There is plenty of material on the web arguing that object pooling, no matter how small the objects are, can benefit application performance.
There are two levels of pooling you could do:
Pooling of basic objects such as Vectors, which you retrieve from the pool each time you have to use one, say to build a map.
Pooling of the higher-level composite objects that are most commonly used.
This is generally an application design decision.
Also, in a multi-threaded application, you should be sensitive to how many different threads are going to be allocating from and returning to the pool. You certainly do not want your application bogged down by contention - especially if you are dealing with thousands of objects at the same time.
@Dave and @Casey, you don't need any proof to show that a contiguous memory layout improves cache efficiency, which is the major bottleneck in most OOP apps that need high performance but follow a "too idealistic" OOP design trajectory.
People often think of the GC as the culprit behind low performance in high-performance Java applications and, after fixing that, just leave it at that, without actually profiling the memory behavior of the application. Note though that uncached memory instructions are inherently more expensive than arithmetic instructions (and are getting ever more expensive due to the widening gap between memory access and computation speed). So if you care about performance, you should certainly care about memory management.
Cache-aware, or more general, data-oriented programming, is the key to achieving high performance in many kinds of applications, such as games, or mobile apps (to reduce power consumption).
Here is a SO thread on DOP.
Here is a slideshow from the Sony R&D department that shows the usefulness of DOP as applied to a playstation game (high performance required).
So how do you solve the problem that Java does not, in general, allow you to allocate a chunk of memory? My guess is that when the program has just started, you can assume that there is very little internal fragmentation in the already allocated pages. If you now have a loop that allocates thousands or millions of objects, they will probably all be as contiguous as possible. Note that you only need to make sure that consecutively used objects end up on the same cache line, which in many modern systems is only 64 bytes. Also, take a look at the DOP slides if you really care about the (memory) performance of your application.
In short: always allocate multiple objects at once (to increase the temporal locality of allocation), and, if your GC supports defragmentation, run it beforehand; otherwise, try to move such allocations to the beginning of your program.
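A minimal sketch of that batch-allocation advice (the Point class is just a stand-in for your own small objects):

final class Point { int x, y; }

class BatchAlloc {
    // Allocate everything in one tight loop so the JVM lays the instances
    // out close together (assuming little prior heap fragmentation, e.g.
    // early in the program's life).
    static Point[] allocate(int n) {
        Point[] pool = new Point[n];
        for (int i = 0; i < n; i++) {
            pool[i] = new Point();
        }
        return pool;
    }
}

Iterating over the array in index order then touches memory roughly sequentially, which is what keeps the cache (and the prefetcher) happy.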
I hope this is of some help,
-Domi
PS: @Dave, the Commons Pool library does not allocate objects contiguously. It only keeps track of the allocations by putting them into a reference array embedded in a stack, linked list, or similar.

Object Pooling in Java

What are the pros and cons of maintaining a pool of frequently used objects and grabbing one from the pool instead of creating a new one? Something like string interning, except that it would be possible for objects of any class.
For example, it can be considered good since it saves GC time and object-creation time. On the other hand, it can become a synchronization bottleneck if used from multiple threads, it demands explicit deallocation, and it introduces the possibility of memory leaks; by tying up memory that could otherwise be reclaimed, it places additional pressure on the garbage collector.
First law of optimization: don't do it. Second law: don't do it unless you actually have measured and know for a fact that you need to optimize and where.
Pooling can be effective only if objects are really expensive to create, and if they can actually be reused (i.e. you can reset their state to something reusable using only public operations).
The two gains you mention are not really there: memory allocation in Java is nearly free (the cost is on the order of 10 CPU instructions, which is nothing). So reducing object creation only saves you the time spent in the constructor. This can be a gain with really heavy objects that can be reused without changing (database connections, threads): you reuse the same connection, the same thread.
GC time is not reduced. In fact it can get worse. With a moving generational GC (which Java's is, or was up to 1.5), the cost of a GC run is determined by the number of live objects, not by the amount of released memory. Live objects are moved to another space in memory (this is what makes memory allocation so fast: free memory is contiguous inside each GC block) a couple of times before being marked as old and moved into the old-generation memory space.
Programming languages and their runtime support, such as the GC, were designed with common usage in mind. If you steer away from the common usage, in many cases you may end up with harder-to-read code that is also less efficient.
Unless the object is expensive to create, I wouldn't bother.
Benefits:
Fewer objects created - if object creation is expensive, this can be significant. (The canonical example is probably database connections, where "creation" includes making a network connection to the server, providing authentication etc.)
Downsides:
More complicated code
Shared resource = locking; potential bottleneck
Violates the GC's expectations of object lifetimes (most objects will be short-lived)
Do you have an actual problem you're trying to solve, or is this speculative? I wouldn't think about doing something like this unless you've got benchmarks/profile runs showing that there's a problem.
Pooling typically means that you cannot make objects immutable. This leads to defensive copying, so you ultimately wind up making many more copies than you would if you had just made a new immutable object.
Immutability is not always desirable, but more often than not you will find that things can be immutable. Making them mutable so that you can reuse them in a pool is probably not a great idea.
So, unless you know for certain that it is an issue, don't bother. Make the code clear and easy to follow and odds are it will be fast enough. If it isn't, the fact that the code is clear and easy to follow will make it easier to speed up (in general).
Don't.
This is 2001 thinking. The only object "pool" that is still worth anything nowadays is a singleton. I use singletons only to reduce object creation for profiling purposes (so I can see more clearly what is impacting the code).
Anything else just fragments memory for no good purpose.
Go ahead and profile the creation of 1,000,000 objects. It is insignificant.
Old article here.
It entirely depends on how expensive your objects are to create compared to the number of times you create them... for instance, objects that are just glorified structs (i.e. they contain only a couple of fields, and no methods other than accessors) can be a real use case for pooling.
A real-life example: I needed to repeatedly extract the n highest-ranked items (integers) from a process generating a great number of integer/rank pairs. I used a "pair" object (an integer, and a float rank value) in a bounded priority queue. Reusing the pairs, versus emptying the queue, throwing the pairs away, and recreating them, yielded a 20% performance improvement... mainly in the GC load, because the pairs never needed to be reallocated throughout the entire life of the JVM.
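A hedged sketch of that reuse pattern (names invented; the original code is not shown in the answer):

import java.util.ArrayDeque;

// Mutable integer/rank pair meant for recycling instead of reallocation.
final class RankedPair {
    int item;
    float rank;

    RankedPair set(int item, float rank) { // reinitialize before each reuse
        this.item = item;
        this.rank = rank;
        return this;
    }
}

// Free list: pairs evicted from the bounded priority queue come back here.
final class PairRecycler {
    private final ArrayDeque<RankedPair> free = new ArrayDeque<>();

    RankedPair obtain(int item, float rank) {
        RankedPair p = free.poll();
        return (p == null ? new RankedPair() : p).set(item, rank);
    }

    void recycle(RankedPair p) {
        free.push(p);
    }
}

Since one thread owns the queue in the original scenario, no synchronization is needed, which is what keeps the reuse cheap.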
Object pools are generally only a good idea for expensive objects like database connections. Up to Java 1.4.2, object pools could improve performance, but as of Java 5.0 object pools were more likely to harm performance than help, and they were often removed to improve performance (and simplicity).
I agree with Jon Skeet's points, if you don't have a specific reason to create a pool of objects, I wouldn't bother.
There are some situations when a pool is really helpful/necessary though. If you have a resource that is expensive to create, but can be reused (such as a database connection), it might make sense to use a pool. Also, in the case of database connections, a pool is useful for preventing your apps from opening too many concurrent connections to the database.

Terracotta Performance and Tips

I am just learning how to use Terracotta after discovering it about a month ago. It is a very cool technology.
Basically what I am trying to do:
My root (System of Record) is a ConcurrentHashMap.
The main Instrumented Class is a "JavaBean" with 30 or so fields that I want to exist in the HashMap.
There will be about 20000 of these JavaBeans that exist in the Hashmap.
Each bean has (at least) 5 fields that will be updated every 5 seconds.
(The reason I am using Terracotta for this is because these JavaBeans need to be accessible across JVMs and nodes.)
Does anyone with more TC experience than me have any tips? Performance is key.
Any examples of other, similar applications?
You might find that batching several changes under one lock scope performs better. Each synchronized block/method forms a write transaction (assuming you use a write lock) that must be sent to the server (and possibly back out to other nodes). By changing a bunch of fields, possibly on a bunch of objects, under one lock, you reduce the overhead of creating a transaction. Something to play with, at least.
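A hedged sketch of that batching idea in plain Java (the bean and its fields are invented for illustration; nothing here is Terracotta-specific API):

import java.util.List;

final class BatchUpdater {
    // Hypothetical bean holding two of the frequently updated fields.
    static final class QuoteBean {
        double price;
        long timestamp;
    }

    private final Object batchLock = new Object();

    // One synchronized block == one clustered write transaction,
    // rather than one transaction per individual field update.
    void applyUpdates(List<QuoteBean> beans) {
        synchronized (batchLock) {
            for (QuoteBean bean : beans) {
                bean.price += 0.01;                       // placeholder update
                bean.timestamp = System.currentTimeMillis();
            }
        }
    }
}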
Partitioning is also a key way to improve performance. Changes only need to be sent to nodes that are actually using an object. So if you can partition which nodes usually touch specific objects that reduces the number of changes that have to be sent around the cluster, which improves performance.
unnutz's suggestions about using ConcurrentHashMap (CHM) or ConcurrentStringMap (CSM) are good ones. CHM allows greater concurrency (as each internal segment can be locked and used concurrently) - make sure to experiment with larger segment counts too. CSM effectively has one lock per entry, so it effectively has N partitions in an N-sized table. That can greatly reduce lock contention (at the cost of managing more internal lock objects). Changes coming soon for CSM will make the lock-management cost much lower.
Generally we find a good strategy is:
Build a performance test (it should be multi-threaded and multi-node, and similar to your app - or be your actual app!).
Tune objects - look at your clustered object graph in the dev console to find objects that don't need to be clustered at all - sometimes this happens accidentally (remove them, or cut them out of the cluster with a transient field). Sometimes you might be clustering a Date where a long would do. A small change, but that's one object per map entry, and it might make a difference.
Tune locks - use the lock profiler in the dev console to find hot locks, or locks that are too narrow or too wide. The clustered stats recorder can help you look at transaction size as well.
Tune GC and DGC - tune JVM garbage collection, then tune the Terracotta distributed GC by turning on young-generation GC and adjusting its frequency.
Tune the TC server - there are lots of very detailed tunings to do here, but they are usually not worth it until the things above are tuned.
Feel free to ask on the Terracotta forums as well - engineering, field engineering, and product management all watch those and answer there.
Firstly, I would suggest you raise this question on their forums too.
Secondly, the performance of your application clustered over Terracotta will depend on the number of write transactions that happen. So you could consider using ConcurrentStringMap (if your keys are Strings) or ConcurrentHashMap. Note that CSM is much better than CHM from a performance point of view.
After all, POJOs are loaded lazily - each property is loaded on demand.
Hope that helps.
Cheers
