How do I spawn the maximum possible number of threads, assuming that each thread may take a different amount of time to complete? The idea is to spawn as many threads as possible without causing any of them to die.
E.g. While (spawnable) spawn more threads;
I am trying to spawn threads that make calls to an EJB. I wish to spawn the maximum number possible to simulate load, without the threads running into an OutOfMemoryError.
Use Executors.newFixedThreadPool(), or, for finer control, create your own ThreadPoolExecutor.
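For example (a minimal sketch; the pool sizes and queue capacity below are assumptions to tune for your own load):

    // simple fixed pool
    ExecutorService fixed = Executors.newFixedThreadPool(16);

    // finer control: core size, maximum size, keep-alive and a bounded work queue
    ThreadPoolExecutor custom = new ThreadPoolExecutor(
            8, 32,                              // core and maximum pool size (assumed)
            60L, TimeUnit.SECONDS,              // keep-alive for threads above the core size
            new LinkedBlockingQueue<>(1000));   // bounded queue of pending tasks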
There is no fixed answer. You need to tune the number of threads to your host's capabilities.
Regarding the memory issue, it is not only a matter of how many threads there are but also of what they do: threads that perform simple calls are very different from threads that handle huge arrays.
Regarding performance, and supposing that your host is dedicated, one thread per core is a reasonable minimum. Given that the threads are going to call a remote system, most of them will spend part of their time idle; depending on the proportion of idle time you can spawn more or fewer.
In essence, check your host's performance and tune your thread count accordingly.
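As a rough illustration, one common sizing heuristic (an assumption to validate with your own measurements, not a rule) divides the core count by the fraction of time a task actually computes:

    int cores = Runtime.getRuntime().availableProcessors();
    double blockingCoefficient = 0.9;   // assumed: ~90% of each remote EJB call is spent waiting
    int poolSize = (int) (cores / (1 - blockingCoefficient));   // ~10x cores with these numbers
    ExecutorService pool = Executors.newFixedThreadPool(poolSize);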
The Executor framework has been cited here, and it's a wonderful tool indeed (Already +1'ed that answer).
But I believe what the OP wants is Executors.newCachedThreadPool().
From the docs:
Creates a thread pool that creates new threads as needed, but will
reuse previously constructed threads when they are available
More on executors here
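A minimal sketch of that approach for the load-simulation case (callEjb() is a hypothetical stand-in for the actual EJB call, and the request count is an assumption):

    ExecutorService pool = Executors.newCachedThreadPool();
    for (int i = 0; i < 10_000; i++) {       // assumed number of simulated calls
        pool.execute(() -> callEjb());       // reuses an idle thread if one exists, otherwise creates a new one
    }
    pool.shutdown();

Note that a cached pool is unbounded, so under extreme load it can still exhaust memory; a fixed pool gives you a hard ceiling instead.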
I have a game server based on Java.
Each user needs two threads, one to send and one to receive data. But whenever the thread count reaches 200-300, the function that processes the data stops working. CPU and RAM of the server are not full, only around 15-20%.
I tried to force a garbage collection when a user disconnects, but this still happens.
Thanks for helping. Sorry for my bad English.
Your service should ideally never create "too many threads".
Opt for a thread pool using ExecutorService.
The number of threads you create the pool with depends upon the kind of underlying task you have.
As a general practice:
1: For a CPU-intensive task, the number of threads should be roughly the number of available processors:
Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());
2: For an IO-intensive task you can create more threads than the number of available processors, because most of your threads will be waiting while the IO is in progress, for example:
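(A sketch; the 4x multiplier is an assumption to tune based on how long your IO actually blocks.)

    int cores = Runtime.getRuntime().availableProcessors();
    ExecutorService cpuBound = Executors.newFixedThreadPool(cores);
    ExecutorService ioBound  = Executors.newFixedThreadPool(cores * 4);   // oversubscribe for blocking IO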
300 Java threads do not sound like too many (compare with the default settings for application servers like Wildfly). If your application is getting stuck but neither CPU nor RAM is the bottleneck, maybe try to figure out what is happening. You may be facing threads waiting for each other to finish as well.
Thus I recommend looking at a thread dump to see where the threads might be stuck. Check out Generate a Java thread dump without restarting.
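If attaching an external tool is not an option, a rough in-process alternative (a sketch, not a substitute for a full thread dump) is to print every thread's stack from within the JVM:

    for (java.util.Map.Entry<Thread, StackTraceElement[]> entry : Thread.getAllStackTraces().entrySet()) {
        System.out.println(entry.getKey().getName() + " - " + entry.getKey().getState());
        for (StackTraceElement frame : entry.getValue()) {
            System.out.println("    at " + frame);
        }
    }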
Try checking out Java's ExecutorService to create a thread pool with a fixed number of threads.
I am trying to implement a divide-and-conquer solution to some large data. I use fork and join to break things down into threads. However, I have a question regarding the fork mechanism: if I set my divide-and-conquer condition as:
@Override
protected SomeClass compute() {
    if (list.size() < LIMIT) {
        // Do something here
        ...
    } else {
        // Divide the list and invoke sub-threads
        SomeRecursiveTaskClass subWorker1 = new SomeRecursiveTaskClass(list.subList(0, list.size() / 2));
        SomeRecursiveTaskClass subWorker2 = new SomeRecursiveTaskClass(list.subList(list.size() / 2, list.size()));
        invokeAll(subWorker1, subWorker2);
        ...
    }
}
What will happen if there are not enough resources to invoke the sub-workers (e.g. not enough threads in the pool)? Does the Fork/Join framework maintain a pool of available threads? Or should I add this condition to my divide-and-conquer logic?
Each ForkJoinPool has a configured target parallelism. This isn’t exactly matching the number of threads, i.e. if a worker thread is going to wait via a ManagedBlocker, the pool may start even more threads to compensate. The parallelism of the commonPool defaults to “number of CPU cores minus one”, so when incorporating the initiating non-pool thread as helper, the resulting parallelism will utilize all CPU cores.
When you submit more jobs than there are threads, they will be enqueued. Enqueuing a few jobs can help keep the threads utilized, as not all jobs may take exactly the same time, so threads running out of work may steal jobs from other threads; but splitting the work too much may create unnecessary overhead.
Therefore, you may use ForkJoinTask.getSurplusQueuedTaskCount() to get the current number of pending jobs that are unlikely to be stolen by other threads and split only when it is below a small threshold. As its documentation states:
This value may be useful for heuristic decisions about whether to fork other tasks. In many usages of ForkJoinTasks, at steady state, each worker should aim to maintain a small constant surplus (for example, 3) of tasks, and to process computations locally if this threshold is exceeded.
So this is the condition for deciding whether to split your jobs further. Since this number reflects when idle threads steal your created jobs, it will provide balancing when the jobs have different CPU loads. It also works the other way round: if the pool is shared (like the common pool) and its threads are already busy, they will not pick up your jobs, the surplus count will stay high, and you will automatically stop splitting.
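A minimal sketch of that condition inside a RecursiveTask, reusing the names from the question's pseudocode (computeDirectly() and combine() are hypothetical helpers; the threshold of 3 comes from the documentation quote above):

    @Override
    protected SomeClass compute() {
        if (list.size() < LIMIT || ForkJoinTask.getSurplusQueuedTaskCount() >= 3) {
            return computeDirectly();   // process locally instead of splitting further
        }
        int mid = list.size() / 2;
        SomeRecursiveTaskClass left  = new SomeRecursiveTaskClass(list.subList(0, mid));
        SomeRecursiveTaskClass right = new SomeRecursiveTaskClass(list.subList(mid, list.size()));
        invokeAll(left, right);
        return combine(left.join(), right.join());   // hypothetical merge of the two partial results
    }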
Executors.newFixedThreadPool(): are there any real-world scenarios where we prefer to keep a fixed set of active threads even when there is nothing to process?
In practice, having a fixed number of threads is almost always better than spawning a new thread every time a task has to be processed.
Threads are expensive to create and maintain, and not limiting the number of active threads in your application can end up actually harming performance. Fixed thread pools reuse already created threads, which removes the thread-creation overhead.
When you keep a fixed number of threads, you can predict your memory and CPU usage better, at least IMHO.
Of course, there is no recipe that fits all use cases and, before choosing what paradigm is best for your particular situation, you should do rigorous testing and measurements. Experimenting with different configurations will give you a better understanding and point you to the best solution.
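To make the trade-off concrete, ThreadPoolExecutor lets you choose either behavior explicitly (a sketch; the pool size of 8 is arbitrary):

    ThreadPoolExecutor pool = (ThreadPoolExecutor) Executors.newFixedThreadPool(8);
    pool.prestartAllCoreThreads();            // keep all 8 threads alive even while there is nothing to process
    // or, if you do not want idle threads hanging around:
    pool.setKeepAliveTime(30, TimeUnit.SECONDS);
    pool.allowCoreThreadTimeOut(true);        // idle core threads will now terminate after the keep-alive time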
I'm working on a redelivery system. The system attempts to execute an action; if the action fails, it tries to execute it again two more times with an interval of five minutes between attempts, so I use an ExecutorService implementation to perform the first execution and a ScheduledExecutorService to schedule the following ones, depending on the result (failure).
What should I consider to figure out the number of threads I need? At the moment I use only a single-thread model (created by the newSingleThreadScheduledExecutor method).
Without knowing details about the load your system is under, the environment it is running in, and how long it takes to process one message, it is hard to say how many threads you need. However, you can keep the following base principles in mind:
Having many threads is bad, because you'll spend a significant amount of time on context switches, and the chance of starvation and wasted system resources is higher.
Each thread consumes some space in memory for its stack. On x64 it is typically 1MB per thread.
I would probably create two thread pools (one scheduled, one non-scheduled) for sending and redelivery respectively, and test them under high load, varying the number of threads from 2 to 10 to see which number suits best.
You should only need the one thread as only one action is running at a time. You could use a CachedThreadPool and not worry about it.
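A minimal sketch of that flow with a single scheduler thread (executeAction() is a hypothetical method returning true on success):

    ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    void attempt(int retriesLeft) {
        scheduler.execute(() -> {
            boolean success = executeAction();
            if (!success && retriesLeft > 0) {
                scheduler.schedule(() -> attempt(retriesLeft - 1), 5, TimeUnit.MINUTES);
            }
        });
    }

    attempt(2);   // first attempt plus up to two redeliveries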
I have a multi-threaded application which creates hundreds of threads on the fly. When the JVM has less memory available than is necessary to create the next Thread, it is unable to create more threads. Every thread lives for 1-3 minutes. Is there a way to create a thread without starting it, and have the application start it automatically when it has the resources, otherwise waiting until existing threads die?
You're responsible for checking your available memory before allocating more resources, if you're running close to your limit. One way to do this is to use the MemoryUsage class, or use one of:
Runtime.getRuntime().totalMemory()
Runtime.getRuntime().freeMemory()
...to see how much memory is available. To figure out how much is used, you simply subtract free from total. Then, in your app, set a MAX_MEMORY_USAGE threshold: when your app has used that amount of memory or more, it stops creating more threads until the amount of used memory has dropped back below the threshold. This way you're always running with the maximum number of threads without exceeding the available memory.
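A sketch of that check (MAX_MEMORY_USAGE, task and pendingTasks are assumed application-specific names):

    Runtime rt = Runtime.getRuntime();
    long used = rt.totalMemory() - rt.freeMemory();   // memory currently in use
    if (used < MAX_MEMORY_USAGE) {
        new Thread(task).start();      // enough headroom: start another worker
    } else {
        pendingTasks.add(task);        // otherwise queue it until usage drops below the threshold
    }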
Finally, instead of trying to create threads without starting them (because once you've created the Thread object, you're already taking up the memory), simply do one of the following:
Keep a queue of things that need to be done, and create a new thread for those things as memory becomes available
Use a "thread pool", let's say a max of 128 threads, as all your "workers". When a worker thread is done with a job, it simply checks the pending work queue to see if anything is waiting to be done, and if so, it removes that job from the queue and starts work.
I ran into a similar issue recently and I used the NotifyingBlockingThreadPoolExecutor solution described at this site:
http://today.java.net/pub/a/today/2008/10/23/creating-a-notifying-blocking-thread-pool-executor.html
The basic idea is that this NotifyingBlockingThreadPoolExecutor will execute tasks in parallel like the ThreadPoolExecutor, but if you try to add a task and there are no threads available, it will wait. It allowed me to keep the simple "create all the tasks I need as soon as I need them" approach while avoiding the huge overhead of having all the waiting tasks instantiated at once.
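If you prefer to stay within the standard library, a rough approximation of that blocking behavior (not the NotifyingBlockingThreadPoolExecutor itself) is a bounded queue combined with CallerRunsPolicy, so the submitting thread slows down instead of piling up tasks:

    ThreadPoolExecutor pool = new ThreadPoolExecutor(
            16, 16,                                     // fixed number of workers (assumed size)
            0L, TimeUnit.MILLISECONDS,
            new ArrayBlockingQueue<>(100),              // bounded queue of pending tasks
            new ThreadPoolExecutor.CallerRunsPolicy()); // when full, the submitter runs the task itself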
It's unclear from your question, but if you're using straight threads instead of Executors and Runnables, you should be learning about java.util.concurrent package and using that instead: http://docs.oracle.com/javase/tutorial/essential/concurrency/executors.html
Just write code to do exactly what you want. Your question describes a recipe for a solution, just implement that recipe. Also, you should give serious thought to re-architecting. You only need a thread for things you want to do concurrently and you can't usefully do hundreds of things concurrently.
This is an alternative, lower-level solution than the above-mentioned NotifyingBlocking executor. It is probably not as ideal, but it will be simple to implement.
If you want a lot of threads on standby, then you ultimately need a mechanism for them to know when it's okay to "come to life". This sounds like a case for semaphores.
Make sure that each thread allocates no unnecessary memory before it starts working. Then implement as follows:
1) Create n threads on startup of the application, stored in a queue. You can base this n on the result of Runtime.getRuntime().freeMemory() rather than hard-coding it.
2) Also, create a semaphore with n - k permits. Again, base this on the amount of memory available.
3) Now, have each of the threads periodically check whether the semaphore has permits available, calling Thread.sleep(...) between checks, for example.
4) If a thread sees an available permit, it acquires it (updating the semaphore) and starts working.
If this satisfies your needs, you can go on to manage your threads using a more sophisticated polling or wait/lock mechanism later.
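A minimal sketch of the semaphore part, assuming WORK_PERMITS is the "n - k" value derived from available memory, 'running' is a volatile shutdown flag, and doUnitOfWork() is a hypothetical work method; for simplicity it blocks on acquire() instead of polling with Thread.sleep():

    Semaphore permits = new Semaphore(WORK_PERMITS);

    Runnable worker = () -> {
        while (running) {
            try {
                permits.acquire();        // come to life only when a permit is free
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
            try {
                doUnitOfWork();           // allocate memory only inside here
            } finally {
                permits.release();
            }
        }
    };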