Java Thread Pool ExecutorService: thread execution order - java

When we create a thread pool using Java's ExecutorService and submit threads to this thread pool, what is the order in which those threads get executed?
I want to ensure that the threads submitted first are executed first.
For example, in the code below, I want the first 5 threads to be executed first, followed by the next 5 threads, and so on...
// Create a thread pool of 5 threads.
ScheduledExecutorService exService = Executors.newScheduledThreadPool(5, new ModifiedThreadFactory("ReadThreadPool"));
// Create 100 threads.
MyThread[] threads = createMyThreads(100);
// Submit these 100 threads to the thread pool for execution.
for (MyThread thread : threads) {
    exService.submit(thread);
}
Does Java's thread pool provide any API for this purpose, or do we need to implement a FIFO queue on our end to achieve this?
If Java's thread pool does not provide any such functionality, I am really interested in understanding the reason behind the absence of this functionality, as it seems like a very common use case to me.
Is it technically not possible (which I think is quite unlikely), or is it just a miss?

That's the default behavior. ScheduledThreadPoolExecutor (which you're using although you're not scheduling anything) extends ThreadPoolExecutor. Tasks submitted to a ThreadPoolExecutor are stored in a BlockingQueue until a thread is available to take and execute them, and the default work queues are FIFO, so tasks are picked up in submission order.
This is described in detail in the Javadoc.
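A minimal sketch to observe the FIFO pickup order (it uses a plain fixed pool instead of the scheduled pool from the question, and the task bodies are made up for illustration):
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class FifoDemo {
    public static void main(String[] args) {
        // One worker thread, so tasks cannot overlap and the pickup order is easy to see.
        ExecutorService pool = Executors.newFixedThreadPool(1);
        for (int i = 0; i < 5; i++) {
            final int id = i;
            // Tasks are taken from the internal queue in the order they were submitted.
            pool.submit(() -> System.out.println("running task " + id));
        }
        pool.shutdown();
    }
}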

Threads do not get executed. Threads are the entities that run tasks such as Runnable and Callable. Submitting such a task to an executor service puts it in the service's internal BlockingQueue until it is picked up by a thread from its thread pool. This still tells you nothing about the order in which tasks finish, as different classes can do different things in their Runnable implementations.

Related

What's difference between request's thread and program's thread?

I'm almost new to Java. I know multithreading is the act of separating a program into several tasks so that they can run concurrently. I have two problems with this concept.
First of all, it's been said that an application server creates a thread per request. I can't relate this per-request thread to a program's threads. Suppose a program in which there are 5 threads to do things concurrently. How is that single per-request thread going to deal with the 5 threads of that program?
Secondly, I have a problem grasping the idea of a thread pool. Is it about the threads that the application server creates per request, or is it about the program's threads that do tasks concurrently?
I have problem grasping the idea of thread pool.
A simple thread pool is a collection of running threads (a.k.a. worker threads) in which each thread continually tries to take a task object from a BlockingQueue; when it gets one, it executes the task and then goes back to the queue to wait for another one.
A task is an object with some well-known method that the worker calls in order to "execute" the task. E.g., in the thread pools defined by the Java standard library, task objects are either Runnable instances or Callable instances, and the worker executes a task by calling task.run() or task.call().
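A minimal sketch of that worker loop (the class name SimplePool is made up; this is illustrative, not the JDK's implementation):
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

class SimplePool {
    private final BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();

    SimplePool(int nThreads) {
        for (int i = 0; i < nThreads; i++) {
            Thread worker = new Thread(() -> {
                try {
                    while (true) {
                        Runnable task = queue.take(); // block until a task is available
                        task.run();                   // "execute" the task on this worker thread
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt(); // stop the worker when interrupted
                }
            });
            worker.setDaemon(true);
            worker.start();
        }
    }

    void submit(Runnable task) {
        queue.add(task);
    }
}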

Difference between ForkJoinPool and normal ExecutionService?

I read a great article about the fork-join framework in Java 7, and the idea is that, with ForkJoinPool and ForkJoinTask, the threads in the pool can take subtasks from other tasks, so the pool is able to use fewer threads to handle more tasks.
Then I tried to use a normal ExecutorService to do the same work, and found I can't tell the difference, since when I submit a new task to the pool, the task will be run on another available thread.
The only difference I can tell is that if I use ForkJoinPool, I don't need to pass the pool to the tasks, because I can call task.fork() to have it run on another thread. But with a normal ExecutorService, I have to pass the pool to the task, or make it static, so that inside the task I can call pool.submit(newTask).
Do I miss something?
(You can view the living code from https://github.com/freewind/fork-join-test/tree/master/src)
Although ForkJoinPool implements ExecutorService, it is conceptually different from 'normal' executors.
You can easily see the difference if your tasks spawn more tasks and wait for them to complete, e.g. by calling
executor.invoke(new Task()); // blocks this thread until new task completes
In a normal executor service, waiting for other tasks to complete will block the current thread. There are two possible outcomes: If your executor service has a fixed number of threads, it might deadlock if the last running thread waits for another task to complete. If your executor dynamically creates new threads on demand, the number of threads might explode and you end up having thousands of threads which might cause starvation.
In contrast, the fork/join framework reuses the threads in the meantime to execute other tasks, so it won't deadlock even though the number of threads is fixed:
new MyForkJoinTask().invoke();
So if you have a problem that you can solve recursively, consider using a ForkJoinPool, as you can easily implement one level of recursion as a ForkJoinTask.
Just check the number of running threads in your examples.
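A hedged sketch of such a recursive task: it sums an array range by splitting it in half (the names SumTask and THRESHOLD are made up for illustration):
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

class SumTask extends RecursiveTask<Long> {
    private static final int THRESHOLD = 1_000;
    private final long[] data;
    private final int from, to;

    SumTask(long[] data, int from, int to) {
        this.data = data;
        this.from = from;
        this.to = to;
    }

    @Override
    protected Long compute() {
        if (to - from <= THRESHOLD) {
            long sum = 0;
            for (int i = from; i < to; i++) sum += data[i];
            return sum;
        }
        int mid = (from + to) / 2;
        SumTask left = new SumTask(data, from, mid);
        left.fork();                                              // run the left half asynchronously
        long rightResult = new SumTask(data, mid, to).compute();  // compute the right half in this thread
        return left.join() + rightResult;                         // the worker can run other queued tasks while joining
    }
}

// Usage:
// long total = new ForkJoinPool().invoke(new SumTask(new long[2_000_000], 0, 2_000_000));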

Java ThreadPool concepts, and issues with controlling the number of actual threads

I am a newbie to Java concurrency and am a bit confused by several concepts and implementation issues here. Hope you guys can help.
Say, I have a list of tasks stored in a thread-safe list wrapper:
ListWrapper jobs = ....
'ListWrapper' has synchronized fetch/push/append functions, and this 'jobs' object will be shared by multiple worker threads.
And I have a worker 'Runnable' to execute the tasks:
public class Worker implements Runnable {
    private ListWrapper jobs;

    public Worker(ListWrapper l) {
        this.jobs = l;
    }

    public void run() {
        while (!jobs.isEmpty()) {
            // fetch an item from jobs and do sth...
        }
    }
}
Now in the main function I execute the tasks:
int NTHREADS = 10;
ExecutorService service = Executors.newFixedThreadPool(NTHREADS);
// run threads..
int x = 3;
for (int i = 0; i < x; i++) {
    service.execute(new Worker(jobs));
}
I tested this code with 'x=3', and I found that only 3 threads are running at the same time; but as I set 'x=20', I found that only 10 (=NTHREADS) are running at the same time. Seems to me the # of actual threads is the min of the two values.
Now my questions are:
1) Which value ('x' or 'NTHREADS') should I set to control the number of concurrent threads? Or it doesn't matter in either I choose?
2) How is this approach different from simply using the Producer-Consumer pattern -- creating a fixed number of 'stud' threads to execute the tasks (shown in the code below)?
Thread t1 = new Thread(new Worker(jobs));
Thread t2 = new Thread(new Worker(jobs));
...
t1.start();
t2.start();
...
t1.join();
t2.join();
...
Thank you very much!!
[[ There are some good answers here but I thought I'd add some more detail. ]]
I tested this code with 'x=3', and I found that only 3 threads are running at the same time; but as I set 'x=20', I found that only 10 (=NTHREADS) are running at the same time. Seems to me the # of actual threads is the min of the two values.
No, not really. I suspect that the reason you weren't seeing 20 threads is that threads had already finished or had yet to be started. If you call new Thread(...).start() 20 times then you will get 20 threads started. However, if you check immediately none of them may have actually begun to run or if you check later they may have finished.
1) Which value ('x' or 'NTHREADS') should I set to control the number of concurrent threads? Or it doesn't matter in either I choose?
Quoting the Javadocs of Executors.newFixedThreadPool(...):
Creates a thread pool that reuses a fixed number of threads operating off a shared unbounded queue. At any point, at most nThreads threads will be active processing tasks.
So changing the NTHREADS constant changes the number of threads running in the pool. Changing x changes the number of jobs that are executed by those threads. You could have 2 threads in the pool and submit 1000 jobs or you could have 1000 threads and only submit 1 job for them to work on.
Btw, after you have submitted all of your jobs, you should shut down the pool; it stops accepting new tasks, and its worker threads exit once all of the submitted jobs have been run.
service.shutdown();
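If you also need to block until those jobs have finished, a common pattern is shutdown() followed by awaitTermination() (a sketch: the one-minute timeout is arbitrary, java.util.concurrent.TimeUnit must be imported, and awaitTermination throws InterruptedException which would need handling):
service.shutdown();                                   // stop accepting new tasks
if (!service.awaitTermination(1, TimeUnit.MINUTES)) { // wait for already-submitted tasks to finish
    service.shutdownNow();                            // interrupt anything still running after the timeout
}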
2) How is this approach different from simply using the Producer-Consumer pattern -- creating a fixed number of 'stud' threads to execute the tasks (shown in the code below)?
It differs in that it does all of the heavy work for you.
You don't have to create a ListWrapper of the jobs since you get one inside of the ExecutorService. You just submit the jobs to the ExecutorService and it keeps track of them until the threads are available to run them.
You don't have to create any threads or worry about them throwing exceptions and dying because the ExecutorService starts/restarts the threads for you.
If you want your tasks to return information you can make use of the submit(Callable) method and use the Future to get the results of the jobs. Etc, etc..
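For example (a minimal sketch using the ExecutorService 'service' from the question; the task body and its result are made up, and future.get() throws checked exceptions that would need handling):
Future<Integer> future = service.submit(() -> {
    // some computation that produces a result
    return 42;
});
Integer result = future.get(); // blocks until the task has completed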
Doing this code yourself is going to be harder to get right, more code to maintain, and most likely will not perform as well as the code in the JDK that is battle tested and optimized.
You shouldn't create threads yourself when using a thread pool. Instead of a WorkerThread class (a Thread subclass) you should use a class that implements Runnable but is not a thread. Passing a Thread object to the thread pool won't actually make that thread run. The object is handed to one of the pool's internal threads, which simply executes the run method of your class.
The ExecutorService is simply incompatible with the way you want to write your program.
In the code you have right now, these workers will stop working when your ListWrapper is empty. If you then add something to the list, nothing will happen. This is definitely not what you wanted.
You should get rid of ListWrapper and simply put your tasks directly into the threadpool. The threadpool already incorporates an internal list of jobs shared between the threads. You should just submit your jobs to the threadpool and it will handle them accordingly.
To answer your questions:
1) Which value ('x' or 'NTHREADS') should I set to control the number of concurrent threads? Or it doesn't matter in either I choose?
NTHREADS, the threadpool will create the necessary number of threads.
2) How is this approach different from simply using the Producer-Consumer pattern -- creating a fixed number of 'stud' threads to execute the tasks (shown in the code below)?
It's just that ExecutorService automates a lot of things for you. You can choose from a lot of different thread pool implementations and you can substitute them easily. You can, for instance, use a scheduled executor. You get extra functionality. Why reinvent the wheel?
For 1) NTHREADS is the maximum number of threads that the pool will ever run concurrently, but that doesn't mean there will always be that many running. It will only use as many as are needed, up to that max value... which in your case is 3.
As the docs say:
At any point, at most nThreads threads will be active processing tasks. If additional tasks are submitted when all threads are active, they will wait in the queue until a thread is available
http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/Executors.html#newFixedThreadPool-int-
As for 2), using Java's concurrent executors framework is preferred for new code. You get a lot of stuff for free, and it removes the need to handle all of the fiddly thread work yourself.
The number of threads passed into newFixedThreadPool is the maximum number of threads that could be running your tasks at once. If you only ever submit three tasks, I'd expect the ExecutorService to only create three threads.
To answer your questions:
You should use the number you pass into the constructor to control how many threads are going to be used to execute your tasks.
This differs because of the extra functionality the ExecutorService gives you, as well as its flexibility, such as when you need to change your ExecutorService type or the number of tasks you'll run (fewer lines of code to change).
All that is happening is the executor service is only creating as many threads as it needs. NTHREADS is effectively the maximum number of threads it'll create.
There is no point creating ten threads up front if it only has 3 tasks to complete, the other 7 will just be hanging around consuming resources.
If you submit more than NTHREADS tasks, it will process NTHREADS of them concurrently and the rest will wait in a queue until a thread becomes free.
This isn't any different from creating a fixed set of your own threads, except the thread management and scheduling is handled for you. The executor service also restarts threads if they are killed by rogue exceptions in your task which you'd otherwise have to code for.
See the Javadoc on Executors.newFixedThreadPool.

How does java.util.concurrent.Executor work?

How does java.util.concurrent.Executor create the "real" thread?
Suppose I am implementing Executor or using an executor service (like ThreadPoolExecutor). How does the JVM work internally?
It calls a ThreadFactory. Look at the Executors class; note that the factory methods all have an overloaded variant where you can supply a ThreadFactory implementation. The ThreadFactory interface is basically
public Thread newThread(Runnable runnable);
and the default implementation, if none is supplied, is basically just return new Thread(runnable);
Why override this? Well, it's very useful for setting the thread name and daemon status, among other things.
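A hedged sketch of such a factory (the class name NamedDaemonThreadFactory is made up for illustration):
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;

class NamedDaemonThreadFactory implements ThreadFactory {
    private final String prefix;
    private final AtomicInteger count = new AtomicInteger();

    NamedDaemonThreadFactory(String prefix) {
        this.prefix = prefix;
    }

    @Override
    public Thread newThread(Runnable runnable) {
        Thread t = new Thread(runnable, prefix + "-" + count.incrementAndGet());
        t.setDaemon(true); // these worker threads won't keep the JVM alive on their own
        return t;
    }
}

// Usage:
// ExecutorService pool = Executors.newFixedThreadPool(4, new NamedDaemonThreadFactory("worker"));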
Executor is a ready-made thread-management interface.
Depending on the type of executor, it creates one or more threads. After a thread finishes its task, the executor either stops it or leaves it running for the next task. You can also have an executor that runs scheduled tasks (for example, every minute). This is a good alternative to creating many threads (often thousands) that are needed for just five seconds, or lots of threads that are only used from time to time.
If you specify the number of threads to create and submit more tasks than there are threads, the remaining Runnable objects are queued until their turn comes. No JVM magic here, just Java code.

java executor with pre-emptable thread queue

I'm looking for a Java thread pool that won't run more threads simultaneously than there are cores in the system. This service is normally provided by a ThreadPoolExecutor using a BlockingQueue.
However, if a new thread is scheduled to execute, I want it to pre-empt one of the already running threads and add the pre-empted thread (in a suspended state) to a task queue, so it can be resumed as soon as the new thread is finished.
Any suggestions?
I would make a subclass of ThreadPoolExecutor.
When you setup your ThreadPoolExecutor you want to set the corePoolSize and the maximumPoolSize to Runtime.getRuntime().availableProcessors() (Look at Executors.newFixedThreadPool() to see why this works).
Next you want to make sure that your Queue also implements Deque. LinkedBlockingDeque is an example but you should shop around to see which one will work best for you. A Deque allows you to get stack like LIFO behavior which is exactly what you want.
Since everything (submit(), invokeAll()) funnels through execute() you will want to override this method. Basically do what you described above:
Check if all threads are running. If not simply start the new runnable on an available thread. If all the threads are already running then you need to find the one running the oldest runnable, stop the runnable, re-queue the runnable somewhere (maybe at the beginning?), and then start your new runnable.
The idea of a ThreadPoolExecutor is to avoid all of the expensive actions related to creating and destroying a thread. If you absolutely insist on preempting the running tasks, then you won't get that from the default API.
If you are willing to allow the running tasks to complete and instead only preempt the tasks which have not begun execution, then you can use a BlockingQueue implementation which works like a Stack (LIFO).
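A hedged sketch of that idea: a pool of availableProcessors() threads whose work queue behaves LIFO, so the most recently submitted (not yet started) task is taken next; running tasks are not preempted (the class name LifoQueue is made up for illustration):
import java.util.concurrent.ExecutorService;
import java.util.concurrent.LinkedBlockingDeque;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

class LifoQueue extends LinkedBlockingDeque<Runnable> {
    @Override
    public boolean offer(Runnable r) {
        return offerFirst(r); // push to the head so take() sees the newest task first
    }
}

// Usage:
// int nCores = Runtime.getRuntime().availableProcessors();
// ExecutorService pool = new ThreadPoolExecutor(
//         nCores, nCores, 0L, TimeUnit.MILLISECONDS, new LifoQueue());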
You can also have tasks 'preempt' other tasks by using different executors with different thread priorities. Essentially, if the OS supports time-slicing, then the higher priority executor gets the time-slice.
Otherwise, you need a custom implementation which manages execution. You could use a SynchronousQueue and have P worker threads waiting on it. If a client calls execute and SynchronousQueue.offer fails, then you would have to create a special worker thread which grabs one of the other threads, flags it to halt before executing, and flags it to resume again after executing.
