The docs for ScheduledThreadPoolExecutor say:
Tasks scheduled for exactly the same execution time are enabled in first-in-first-out (FIFO) order of submission.
Does this mean that tasks which SHOULD run at the same time are never actually run at the same time, but are instead executed in FIFO order?
If that is true, which class should I use instead that is better than Timer and does not have this FIFO problem?
The way a ScheduledThreadPoolExecutor works is there is a single "scheduling" or master thread which checks for tasks to execute.
If it finds a task, it delegates it to a "worker" thread from the pool.
If multiple tasks are ready to be executed, they are "kicked off" one at a time, but once "kicked off", they run concurrently on the worker threads.
If you have two tasks that are both scheduled through the executor for the same time, the order in which they complete can vary from run to run. Unless you add specific controls such as locks, waits, etc. to coordinate them, it is up to Java's thread scheduling (how Java allots time to threads on a core) to determine how and when each one gets processed. Note that setting up such locks and waits is a deceptively complex task, prone to race conditions that lead to unexpected deadlocks, livelocks, etc.
It depends on the size of your thread pool. If you schedule 1000 tasks to fire at midnight, and you only have 25 threads, then only 25 can be executed initially, while the rest must wait for available threads. FIFO here refers to the order in which the executor will hand tasks off to the execution threads.
Please note that the docs talk about "enabling" the tasks and that we are talking about a threadpool executor. :-)
That means the tasks will wait until the designated time, then they are treated as if put into a normal ThreadPoolExecutor. If there are enough threads available in the pool all these tasks will be run in parallel.
Only if more tasks become active than there are available threads in the pool will some tasks have to wait.
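As a minimal sketch of that behaviour (the class name and timings are made up), the following schedules four tasks for the same instant on a pool of four threads; the executor enables them in FIFO order, but they run in parallel:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class SameTimeDemo {
    public static void main(String[] args) throws InterruptedException {
        // Pool with 4 worker threads; all tasks below are scheduled for the same instant.
        ScheduledExecutorService pool = Executors.newScheduledThreadPool(4);
        for (int i = 0; i < 4; i++) {
            final int id = i;
            pool.schedule(() -> {
                // The four tasks are "enabled" in FIFO order, but with four free threads
                // they start almost simultaneously and run in parallel.
                System.out.println("task " + id + " running on " + Thread.currentThread().getName());
            }, 1, TimeUnit.SECONDS);
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}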
In my Java application I have a Runnable such as:
this.runner = new Runnable() {
    @Override
    public void run() {
        // do something that takes roughly 5 seconds.
    }
};
I need to run this roughly every 30 seconds (although this can vary) in a separate thread. The nature of the code is such that I can run it and forget about it (whether it succeeds or fails). I do this as follows as a single line of code in my application:
(new Thread(this.runner)).start();
Now, this works fine. However, I'm wondering if there is any sort of cleanup I should be doing on each of the thread instances after they finish running? I am doing CPU profiling of this application in VisualVM and I can see that, over the course of 1 hour runtime, a lot of threads are being created. Is this concern valid or is everything OK?
N.B. The reason I start a new Thread instead of simply defining this.runner as a Thread, is that I sometimes need to run this.runner twice simultaneously (before the first run call has finished), and I can't do that if I defined this.runner as a Thread since a single Thread object can only be run again once the initial execution has finished.
Java objects that need to be "cleaned up" or "closed" after use conventionally implement the AutoCloseable interface. This makes it easy to do the clean up using try-with-resources. The Thread class does not implement AutoCloseable, and has no "close" or "dispose" method. So, you do not need to do any explicit clean up.
However
(new Thread(this.runner)).start();
is not guaranteed to immediately start computation of the Runnable. You might not care whether it succeeds or fails, but I guess you do care whether it runs at all. And you might want to limit the number of these tasks running concurrently; you might want only one to run at once, for example. So you might want to join() the thread (or, perhaps, join with a timeout). Joining the thread ensures that it completes its computation before the current thread continues. Joining with a timeout increases the chance that the thread at least starts its computation (because the current thread will be suspended, freeing a CPU that might run the other thread).
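For example, a small sketch of joining with a timeout (the 10-second bound is an arbitrary choice):

Thread worker = new Thread(this.runner);
worker.start();
try {
    // Wait at most 10 seconds; this also suspends the current thread,
    // which helps the new thread get scheduled promptly.
    worker.join(10_000);
} catch (InterruptedException e) {
    Thread.currentThread().interrupt(); // restore the interrupt flag
}
if (worker.isAlive()) {
    // Still running after 10 seconds; decide whether to keep waiting or move on.
}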
However, creating multiple threads to perform regular or frequent tasks is not recommended. You should instead submit tasks to a thread pool. That enables you to control the maximum amount of concurrency, can provide other benefits (such as prioritising different tasks), and amortises the expense of creating threads.
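A minimal sketch of that approach for your case, assuming a hypothetical RunnerService class and an arbitrary pool size of 4:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class RunnerService {
    // Created once and reused for every fire-and-forget run.
    // The pool size of 4 is an arbitrary cap on how many runs may overlap.
    private final ExecutorService pool = Executors.newFixedThreadPool(4);

    private final Runnable runner = () -> {
        // do something that takes roughly 5 seconds.
    };

    // Called roughly every 30 seconds; replaces (new Thread(this.runner)).start();
    public void fireAndForget() {
        pool.submit(this.runner);
    }

    // Call on application shutdown so the pool's threads can exit.
    public void stop() {
        pool.shutdown();
    }
}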
You can configure a thread pool to use a fixed-length (bounded) task queue and to cause submitting threads to execute submitted tasks themselves when the queue is full. By doing that you can guarantee that tasks submitted to the thread pool are (eventually) executed. The documentation of ThreadPoolExecutor.execute(Runnable) says it
Executes the given task sometime in the future
which suggests that the implementation guarantees it will eventually run all submitted tasks even if you do not take those specific steps to ensure submitted tasks are executed.
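A hedged sketch of such a configuration (the pool sizes and queue capacity are arbitrary), combining a bounded queue with the CallerRunsPolicy handler so that a submitting thread runs the task itself when the queue is full:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// 2 core threads, at most 4 threads, idle extras die after 60 seconds,
// and a bounded queue holding at most 10 waiting tasks.
ThreadPoolExecutor executor = new ThreadPoolExecutor(
        2, 4, 60, TimeUnit.SECONDS,
        new ArrayBlockingQueue<>(10),
        // When the queue is full, the submitting thread runs the task itself,
        // so every submitted task is eventually executed.
        new ThreadPoolExecutor.CallerRunsPolicy());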
I recommend you look at the Concurrency API. It provides numerous pre-defined classes and methods for general use. With an ExecutorService you can call the shutdown method after submitting your tasks: the executor then stops accepting new tasks, lets previously submitted tasks finish, and terminates once they have completed.
For a short introduction:
https://www.baeldung.com/java-executor-service-tutorial
In a ForkJoinPool, does the current worker thread executing a ForkJoinTask participate in work stealing?
I have read implications that a fork join pool can steal work from blocked or waiting threads. The current worker seems an obvious candidate. Once the worker calls .join() on another task, the calling task is essentially blocked.
On the other hand, I see many articles that imply different conclusions. For example, the general consensus is that the current worker thread should do work itself before waiting for forked tasks.
There are a few articles that discuss the use of ForkJoinTask.getSurplusQueuedTaskCount as a method of balancing the work in the queue by having the current worker do some of the work. If the current worker is also stealing, then this doesn't seem necessary.
Naturally, I would like to maximize thread utilisation and keep all workers running as much as possible. Understanding whether the current thread also steals work (for example when .join() is called) will help clarify this.
It is the responsibility of the ForkJoinPool to manage threads. Client code should feed it tasks, not micromanage the threading. Note that tasks and threads are two different things; tasks are units of work to be executed, and threads execute that work.
ForkJoinTask.compute() should fork() into smaller subtasks if the task is large enough to benefit from running parts of the task in parallel, and simply process the task if the task is small enough that it would better be run in a single thread. If the work turns out to be more than expected, it can fork() some of the work and do the rest of it.
If ForkJoinTask.compute() forks into smaller subtasks, it can call join() before returning. The ForkJoinPool will then either free the thread to work on other tasks, or spawn a temporary thread to work on other tasks to ensure the available parallelism is fully utilized.
I think it's reasonable to assume that the appropriate number of worker threads are kept busy for as long as there are uncompleted tasks, unless you explicitly block the thread in the compute() method.
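A minimal sketch of that pattern, using a hypothetical SumTask that splits an array summation, forks one half, computes the other half directly, and then joins:

import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Hypothetical example: sums a range of an array, splitting while the range is large.
class SumTask extends RecursiveTask<Long> {
    private static final int THRESHOLD = 1_000;
    private final long[] data;
    private final int from, to;

    SumTask(long[] data, int from, int to) {
        this.data = data;
        this.from = from;
        this.to = to;
    }

    @Override
    protected Long compute() {
        if (to - from <= THRESHOLD) {
            long sum = 0;
            for (int i = from; i < to; i++) sum += data[i];
            return sum;                                    // small enough: just do the work
        }
        int mid = (from + to) / 2;
        SumTask left = new SumTask(data, from, mid);
        left.fork();                                       // hand one half to the pool
        long right = new SumTask(data, mid, to).compute(); // do the other half ourselves
        return left.join() + right;                        // wait for the forked half
    }
}

// Usage: long total = ForkJoinPool.commonPool().invoke(new SumTask(array, 0, array.length));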
The Oracle tutorial provides more specifics on how to use these classes:
https://docs.oracle.com/javase/tutorial/essential/concurrency/forkjoin.html
I have a number of tasks that I would like to execute periodically, mostly at different rates. Some of the tasks may be scheduled for simultaneous execution though. Also, a task may need to start executing while another is currently executing.
I would also like to customize each task by setting an object for it, on which the task will operate while it is being executed.
Usually, the tasks will execute in periods of 2 to 30 minutes and will take around 4-5 seconds, sometimes up to 30 seconds when they are executed.
I've found Executors.newSingleThreadScheduledExecutor(ThreadFactory) to be almost exactly what I want, except that it might cause me problems if a new task happens to be scheduled for execution while another is already executing. This is because the executor is backed by a single execution thread.
The alternative is to use Executors.newScheduledThreadPool(corePoolSize, ThreadFactory), but this requires me to create a number of threads in a pool. I would like to avoid creating threads until it is necessary, for instance if I have two or more tasks that happen to need parallel execution due to their colliding execution schedules.
For the case above, the Executors.newCachedThreadPool(ThreadFactory) appears to do what I want, but then I can't schedule my tasks. A combination of both cached and scheduled executors would be best I think, but I am unable to find something like that in Java.
What would be the best way to implement the above do you think?
Isn't the constructor ScheduledThreadPoolExecutor(int):
ScheduledThreadPoolExecutor executor = new ScheduledThreadPoolExecutor(0);
what you need? 0 is the corePoolSize:
corePoolSize - the number of threads to keep in the pool, even if they are idle, unless allowCoreThreadTimeOut is set
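For illustration, a minimal sketch (the pool size of 2 and the task bodies are assumptions, not part of the question) that schedules two tasks at different rates on a ScheduledThreadPoolExecutor:

import java.util.concurrent.ScheduledThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PeriodicTasksDemo {
    public static void main(String[] args) {
        // corePoolSize of 2: the pool keeps up to 2 threads, enough for the
        // occasional case where both tasks' schedules collide.
        ScheduledThreadPoolExecutor executor = new ScheduledThreadPoolExecutor(2);

        executor.scheduleAtFixedRate(
                () -> System.out.println("task A on " + Thread.currentThread().getName()),
                0, 2, TimeUnit.MINUTES);
        executor.scheduleAtFixedRate(
                () -> System.out.println("task B on " + Thread.currentThread().getName()),
                0, 5, TimeUnit.MINUTES);
    }
}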
I guess you will not be able to do that with a scheduled executor, because it uses a DelayedWorkQueue, whereas newCachedThreadPool uses a ThreadPoolExecutor with a SynchronousQueue as its work queue.
So you cannot change the implementation of ScheduledThreadPoolExecutor to act like that.
I have a somewhat complex problem, described below.
We have a real-time system that requires a large number of threads. In order to optimize performance, we are thinking of the following design:
- create a thread pool executor with the maximum number of threads
- use each thread to create a scheduled executor service
- assign tasks to these executor services evenly, based on load
BUT the biggest problem is that if one of the tasks in a queue contains a sleep (for a few seconds), it blocks the corresponding scheduled executor service thread for that duration, and consequently all the following tasks in that queue.
In this regard, please suggest how to suspend the execution of a task that sleeps, OR how to override the sleep somehow and re-schedule the remainder of the task back onto the queue.
Thanks in advance
Seshu
Assuming I understand your question, your scheduled executor service threads have a deadline requirement, but the actual workers can sleep for an unknown length of time, possibly throwing off the timing of the scheduled executors. From your description I'm guessing what you want is for a task that needs to sleep to actually stop, save its progress, and then requeue itself so the remainder of the work is rescheduled at some future time. You'd have to build this into your application architecture.
Alternatively, you could have the scheduler threads launch the worker tasks in their own separate threads, letting them sleep as necessary, with one scheduler thread collecting all the worker terminations.
To get a better answer you're going to have to provide more information about what you're trying to accomplish.
Tasks which sleep are inherently unfriendly for running in any kind of bounded thread pool. The sleep is explicitly telling the thread that it must do nothing for a period of time.
If possible, split the task into 2 (or more parts), eliminating the sleep completely. Get the first half-task to schedule the second task with an appropriate delay.
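A minimal sketch of that split, assuming a hypothetical task that used to do some work, sleep for 5 seconds, and then do follow-up work:

import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Hypothetical: instead of doFirstHalf(); Thread.sleep(5000); doSecondHalf();
// the first half schedules the second half with the delay that the sleep used to provide.
class SplitTask implements Runnable {
    private final ScheduledExecutorService scheduler;

    SplitTask(ScheduledExecutorService scheduler) {
        this.scheduler = scheduler;
    }

    @Override
    public void run() {
        doFirstHalf();
        // No Thread.sleep(5000) here: the worker thread is freed immediately,
        // and the second half runs 5 seconds later on whichever thread is free.
        scheduler.schedule(this::doSecondHalf, 5, TimeUnit.SECONDS);
    }

    private void doFirstHalf()  { /* work before the former sleep */ }
    private void doSecondHalf() { /* work after the former sleep */ }
}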
Failing that, you could consider increasing the size of your thread pool somewhat, either setting a much larger cap on its size, or possibly even eliminating the cap altogether (not recommended for a server that might end up with many clients).
Alternatively, move the tasks with sleep statements in them into their own Scheduled executor. Then, they'll delay each other, but better-behaved tasks, with no wait statements in them, will get preferential treatment.