Difference between Executors.newFixedThreadPool(1) and Executors.newSingleThreadExecutor() - java

My question is: does it make sense to use Executors.newFixedThreadPool(1)? In a two-thread (main + oneAnotherThread) scenario, is it efficient to use an executor service? Is creating a new thread directly by calling new Runnable(){ } better than using ExecutorService? What are the upsides and downsides of using ExecutorService for such scenarios?
PS: The main thread and oneAnotherThread don't access any common resource(s).
I have gone through: What are the advantages of using an ExecutorService? and Only one thread at a time!

does it make sense to use Executors.newFixedThreadPool(1)?
It is essentially the same thing as an Executors.newSingleThreadExecutor() except that the latter is not reconfigurable, as indicated in the javadoc, whereas the former is if you cast it to a ThreadPoolExecutor.
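For instance (a minimal sketch, assuming a standard JDK where these factories return a ThreadPoolExecutor under the hood), the pool returned by newFixedThreadPool(1) can be resized later, while the one returned by newSingleThreadExecutor() cannot:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;

public class ReconfigureDemo {
    public static void main(String[] args) {
        ExecutorService fixed = Executors.newFixedThreadPool(1);
        // The fixed pool is a plain ThreadPoolExecutor, so it can be reconfigured:
        ThreadPoolExecutor tpe = (ThreadPoolExecutor) fixed;
        tpe.setMaximumPoolSize(4);
        tpe.setCorePoolSize(4);   // now effectively a 4-thread pool

        ExecutorService single = Executors.newSingleThreadExecutor();
        // The following cast would throw ClassCastException: the executor is wrapped
        // in a delegate that only exposes the ExecutorService methods.
        // ThreadPoolExecutor notReconfigurable = (ThreadPoolExecutor) single;

        tpe.shutdown();
        single.shutdown();
    }
}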
In two threads (main + oneAnotherThread) scenarios is it efficient to use executor service?
An executor service is a very thin wrapper around a Thread that significantly facilitates thread lifecycle management. If the only thing you need is to new Thread(runnable).start(); and move on, then there is no real need for an ExecutorService.
In most real-life cases, however, the ability to monitor the life cycle of the tasks (through the returned Futures), the fact that the executor will re-create threads as required in case of uncaught exceptions, the performance gain of recycling threads vs. creating new ones, etc. make the executor service a much more powerful solution at little additional cost.
Bottom line: I don't see any downsides of using an executor service vs. a thread.
The related question Executors.newSingleThreadExecutor().execute(command) vs new Thread(command).start() goes through the small differences in behaviour between the two options.
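As a small illustration of the lifecycle-management point above (a sketch only; the task and the timeout value are made up), an executor lets you submit work, stop accepting new tasks, and wait for everything to finish:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class LifecycleDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        executor.submit(() -> System.out.println("doing some work"));

        executor.shutdown();                          // stop accepting new tasks
        if (!executor.awaitTermination(5, TimeUnit.SECONDS)) {
            executor.shutdownNow();                   // cancel anything still running
        }
    }
}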

Sometimes you need to use Executors.newFixedThreadPool(1) so that you can determine the number of tasks waiting in the queue:

private final ExecutorService executor = Executors.newFixedThreadPool(1);

public int getTaskInQueueCount() {
    ThreadPoolExecutor threadPoolExecutor = (ThreadPoolExecutor) executor;
    return threadPoolExecutor.getQueue().size();
}

does it make sense to use Executors.newFixedThreadPool(1)?
Yes, it makes sense if you want all submitted tasks to be processed in order of arrival.
In two threads (main + oneAnotherThread) scenarios is it efficient to use executor service? Is creating a new thread directly by calling new Runnable(){ } better than using ExecutorService?
I prefer ExecutorService or ThreadPoolExecutor even for 1 thread.
Refer to the SE question below for an explanation of the advantages of ThreadPoolExecutor over new Runnable():
ExecutorService vs Casual Thread Spawner
What are the upsides and downsides of using ExecutorService for such scenarios?
Have a look at the related SE question regarding ExecutorService use cases:
Java's Fork/Join vs ExecutorService - when to use which?
Regarding your query in the subject line (source taken from grepcode), both are the same:
The newFixedThreadPool API returns a ThreadPoolExecutor as an ExecutorService:

public static ExecutorService newFixedThreadPool(int nThreads) {
    return new ThreadPoolExecutor(nThreads, nThreads,
                                  0L, TimeUnit.MILLISECONDS,
                                  new LinkedBlockingQueue<Runnable>());
}

and newSingleThreadExecutor() returns a (wrapped) ThreadPoolExecutor as an ExecutorService:

public static ExecutorService newSingleThreadExecutor() {
    return new FinalizableDelegatedExecutorService
        (new ThreadPoolExecutor(1, 1,
                                0L, TimeUnit.MILLISECONDS,
                                new LinkedBlockingQueue<Runnable>()));
}
I agree with @assylias' answer regarding the similarities/differences.

Is creating a new thread directly by calling new Runnable(){ } better than using ExecutorService?
If you want to compute something on the result returned after the task completes, you can use the Callable interface, which can be used with an ExecutorService but not with new Runnable(){}. The ExecutorService's submit() method, which takes a Callable object as an argument, returns a Future object. On this Future you can check whether the task has completed using the isDone() method, and you can retrieve the result using the get() method.
In this case, ExecutorService is better than new Runnable(){}.
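A minimal sketch of that pattern (the sleep and the returned value are just placeholders):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CallableDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newSingleThreadExecutor();

        // submit() with a Callable returns a Future carrying the result
        Future<Integer> future = executor.submit(() -> {
            Thread.sleep(100);   // pretend to do some work
            return 42;
        });

        System.out.println("Done yet? " + future.isDone());
        System.out.println("Result: " + future.get()); // blocks until the result is available

        executor.shutdown();
    }
}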

Related

Java parallelstream not using optimal number of threads when using newCachedThreadPool()

I have made two separate implementations of parallel reads from a database.
The first implementation uses an ExecutorService created with newCachedThreadPool() and Futures: I simply make a call that returns a Future for each read case, and after I have made all the calls I call get() on them. This implementation works OK and is fast enough.
The second implementation uses parallel streams. When I submit the parallel stream call to the same kind of ExecutorService pool, it works almost 5 times slower, and it seems that it is not using as many threads as I would hope. When I instead submit it to ForkJoinPool pool = new ForkJoinPool(50), it works as fast as the previous implementation.
My question is:
Why do parallel streams under-utilize threads in newCachedThreadPool version?
Here is the code for the second implementation (I am not posting the first implementation, since that one works OK anyway):
private static final ExecutorService pool = Executors.newCachedThreadPool();

final List<AbstractMap.SimpleImmutableEntry<String, String>> simpleImmutableEntryStream =
        personIdList.stream().flatMap(
                personId -> movieIdList.stream().map(
                        movieId -> new AbstractMap.SimpleImmutableEntry<>(personId, movieId)))
                .collect(Collectors.toList());

final Future<Map<String, List<Summary>>> futureMovieSummaryForPerson = pool.submit(() -> {
    final Stream<Summary> summaryStream = simpleImmutableEntryStream.parallelStream().map(
            inputPair -> {
                return FeedbackDao.find(inputPair.getKey(), inputPair.getValue());
            }).filter(Objects::nonNull);
    return summaryStream.collect(Collectors.groupingBy(Summary::getPersonId));
});
This is related to how ForkJoinTask.fork is implemented: if the current thread is a worker thread of a ForkJoinPool, it will submit the new tasks to that same pool; otherwise it will use the common pool, whose parallelism is based on the number of processors in your machine. When you create your pool with Executors.newCachedThreadPool(), the threads created by that pool are not recognized as coming from a ForkJoinPool, so the common pool is used.
Here is how it is implemented; it should help you to better understand:
public final ForkJoinTask<V> fork() {
    Thread t;
    if ((t = Thread.currentThread()) instanceof ForkJoinWorkerThread)
        ((ForkJoinWorkerThread)t).workQueue.push(this);
    else
        ForkJoinPool.common.externalPush(this);
    return this;
}
The threads created by Executors.newCachedThreadPool() are not of type ForkJoinWorkerThread, so the parallel stream submits its subtasks to the common pool, whose size is not well suited to this kind of workload.
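A common workaround (sketched below; the pool size of 50 is simply taken from the question, and the fact that the stream runs in the surrounding pool is an implementation detail of the streams library rather than a documented guarantee) is to run the parallel stream from inside a ForkJoinPool you size yourself, so that fork() lands in that pool instead of the common pool:

import java.util.List;
import java.util.concurrent.ForkJoinPool;

public class CustomPoolParallelStream {
    public static void main(String[] args) throws Exception {
        List<Integer> ids = List.of(1, 2, 3, 4, 5);

        // Size the pool for the (blocking) database calls rather than the CPU count.
        ForkJoinPool customPool = new ForkJoinPool(50);
        try {
            int sum = customPool.submit(
                    // The stream runs on customPool's worker threads because the
                    // calling thread is a ForkJoinWorkerThread of that pool.
                    () -> ids.parallelStream().mapToInt(Integer::intValue).sum()
            ).get();
            System.out.println(sum);
        } finally {
            customPool.shutdown();
        }
    }
}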

ExecutorService SingleThreadExecutor

I have a list of objects, and depending on user interaction some of these objects need to do work asynchronously. Something like this:

for (TheObject o : this.listOfObjects) {
    o.doWork();
}

The class TheObject has its own ExecutorService (single thread!), which is used to do the work; every object of type TheObject instantiates an ExecutorService. I don't want to make lasagna code, and I don't have enough objects at the same time to need an extra abstraction layer with thread pooling.
I want to cite the Java Documentation about CachedThreadPools:
Threads that have not been used for sixty seconds are terminated and
removed from the cache. Thus, a pool that remains idle for long enough
will not consume any resources.
First question: Is this also true for a SingleThreadExecutor? Does the thread get terminated? The JavaDoc doesn't say anything about this for SingleThreadExecutor. It wouldn't even matter in this application, as I have an amount of objects I can count on one hand; just curiosity.
Furthermore, the doWork() method of TheObject needs to call the ExecutorService's submit() method to do the work asynchronously. Is it possible (I bet it is) to have doWork() submit itself to the executor? Is this a viable way of designing an async method?
void doWork() {
    if (!isRunningAsync) {
        // re-submit this method so that it runs on the executor thread
        // (isRunningAsync has to be flipped before the re-submitted call runs)
        myExecutor.submit(() -> doWork());
    } else {
        // Do Work...
    }
}
First question: Is this also true for a SingleThreadExecutor? Does the thread get terminated?
Take a look at the source code of Executors, comparing the implementations of newCachedThreadPool and newSingleThreadExecutor:
public static ExecutorService newCachedThreadPool() {
    return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
                                  60L, TimeUnit.SECONDS,
                                  new SynchronousQueue<Runnable>());
}

public static ExecutorService newSingleThreadExecutor() {
    return new FinalizableDelegatedExecutorService
        (new ThreadPoolExecutor(1, 1,
                                0L, TimeUnit.MILLISECONDS,
                                new LinkedBlockingQueue<Runnable>()));
}
The primary difference (of interest here) is the 60L, TimeUnit.SECONDS and 0L, TimeUnit.MILLISECONDS.
Effectively (but not actually), these parameters are passed to ThreadPoolExecutor.setKeepAliveTime. Looking at the Javadoc of that method:
A time value of zero will cause excess threads to terminate immediately after executing tasks.
where "excess threads" actually refers to "threads in excess of the core pool size".
The cached thread pool is created with zero core threads, and an (effectively) unlimited number of non-core threads; as such, any of its threads can be terminated after the keep alive time.
The single thread executor is created with 1 core thread and zero non-core threads; as such, there are no threads which can be terminated after the keep alive time: its one core thread remains active until you shut down the entire ThreadPoolExecutor.
(Thanks to @GPI for pointing out that my earlier interpretation was wrong.)
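If you do want a single-threaded executor whose worker thread can time out when idle, one option (a sketch, not what Executors.newSingleThreadExecutor() gives you; the 60-second keep-alive is arbitrary) is to build the ThreadPoolExecutor yourself and allow the core thread to time out:

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class TimingOutSingleThreadExecutor {
    public static void main(String[] args) {
        ThreadPoolExecutor executor = new ThreadPoolExecutor(
                1, 1,                               // one core thread, no extra threads
                60L, TimeUnit.SECONDS,              // keep-alive for idle threads
                new LinkedBlockingQueue<Runnable>());
        executor.allowCoreThreadTimeOut(true);      // let the core thread terminate when idle

        executor.execute(() -> System.out.println("task on " + Thread.currentThread().getName()));
        // After ~60 idle seconds the worker thread is terminated;
        // a new one is created when the next task arrives.
        executor.shutdown();
    }
}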
First question:
Threads that have not been used for sixty seconds are terminated and removed from the cache. Thus, a pool that remains idle for long enough will not consume any resources.
Is this also true for a SingleThreadExecutor?
SingleThreadExecutor works differently: it has no time-out behaviour, because of the values configured when it is created.
Termination of the single thread is possible (for example after a failure during task execution), but the executor guarantees that one thread always exists to handle the tasks in its queue.
From newSingleThreadExecutor documentation:
public static ExecutorService newSingleThreadExecutor()
Creates an Executor that uses a single worker thread operating off an unbounded queue. (Note however that if this single thread terminates due to a failure during execution prior to shutdown, a new one will take its place if needed to execute subsequent tasks.)
Tasks are guaranteed to execute sequentially, and no more than one task will be active at any given time. Unlike the otherwise equivalent newFixedThreadPool(1) the returned executor is guaranteed not to be reconfigurable to use additional threads.
Second question:
Furthermore, the doWork() method of TheObject needs to call the ExecutorService's submit() method to do the work asynchronously
for (TheObject o : this.listOfObjects) {
    o.doWork();
}
can be changed to
ExecutorService executorService = Executors.newSingleThreadExecutor();

executorService.execute(new Runnable() {
    public void run() {
        System.out.println("Asynchronous task");
    }
});

executorService.shutdown();
using the Callable or Runnable interface, and putting your doWork() code in the run() or call() method. The tasks will then be executed asynchronously.
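Applied to the original loop, that could look something like this (a sketch; TheObject and listOfObjects come from the question, and whether a single shared executor fits depends on whether the objects' work may be serialized behind each other):

ExecutorService executorService = Executors.newSingleThreadExecutor();

for (TheObject o : this.listOfObjects) {
    executorService.submit(o::doWork);   // doWork() now runs on the executor thread
}

executorService.shutdown();              // no more tasks; lets the worker thread finish and exit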

ThreadPoolExecutor Vs ExecutorService for service time out use cases

I am going to implement a timeout framework between two services. I am looking at the pros & cons of ThreadPoolExecutor vs ExecutorService.
Code with ExecutorService:
ExecutorService service = Executors.newFixedThreadPool(10);

for (int i = 0; i < 10; i++) {
    MyCallable myCallable = new MyCallable((long) i);
    Future<Long> futureResult = service.submit(myCallable);
    Long result = null;
    try {
        result = futureResult.get(5000, TimeUnit.MILLISECONDS);
    } catch (TimeoutException e) {
        System.out.println("Time out after 5 seconds");
        futureResult.cancel(true);
    } catch (InterruptedException ie) {
        System.out.println("Error: Interrupted");
    } catch (ExecutionException ee) {
        System.out.println("Error: Execution interrupted");
    }
    System.out.println("Result:" + result);
}
service.shutdown();
Code snippet for MyCallable
class MyCallable implements Callable<Long> {
    Long id = 0L;

    public MyCallable(Long val) {
        this.id = val;
    }

    public Long call() {
        // **Call a service and get id from the service**
        return id;
    }
}
If I want to implement it with ThreadPoolExecutor, I would code it this way:

/* Thread pool executor */
BlockingQueue<Runnable> queue = new ArrayBlockingQueue<Runnable>(300);

ThreadPoolExecutor eventsExecutor =
        new ThreadPoolExecutor(1, 10, 60,
                TimeUnit.SECONDS, queue, new MyRejectionHandler());

/* I can submit the tasks as in the code example above that used Futures */
Now I am looking at the pros & cons of using ThreadPoolExecutor vs ExecutorService. Please don't think that this question is a duplicate of ExecutorService vs ThreadPoolExecutor (which uses LinkedBlockingQueue).
I have some queries after reading the above question, and hence posted this one.
It was recommended to use ExecutorService with the Executors.XXX factory methods. If I use the Executors.XXX() methods, can I still set a RejectionHandler, the BlockingQueue size, etc.? If not, do I have to fall back on ThreadPoolExecutor?
Does the ThreadPoolExecutor created by the ExecutorService factory methods offer an unbounded queue? I am implementing a timeout framework between two services.
Which one is the best option between these two? Or do I have another, better option?
It was recommended to use ExecutorService with the Executors.XXX factory methods. If I use the Executors.XXX() methods, can I still set a RejectionHandler, the BlockingQueue size, etc.? If not, do I have to fall back on ThreadPoolExecutor?
No, you can't specify these things via Executors factory methods. However, take a look at the source code of Executors: you will see that its newXXX methods simply wrap calls to create ThreadPoolExecutor instances.
As such, there is no particular advantage to using Executors, aside from the convenience of not having to specify many of the parameters. If you need to specify these additional capabilities, you will need to create the ThreadPoolExecutor instances directly.
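For example (a sketch with made-up pool and queue sizes and one of the standard rejection policies), creating the ThreadPoolExecutor directly lets you bound the queue and choose what happens when it is full:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class DirectThreadPool {
    public static void main(String[] args) {
        ThreadPoolExecutor executor = new ThreadPoolExecutor(
                2, 10,                                     // core and maximum pool size
                60L, TimeUnit.SECONDS,                     // keep-alive for non-core threads
                new ArrayBlockingQueue<Runnable>(300),     // bounded work queue
                new ThreadPoolExecutor.CallerRunsPolicy()  // rejection handler when the queue is full
        );

        executor.execute(() -> System.out.println("hello from the pool"));
        executor.shutdown();
    }
}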
Does ExecutorService offer an unbounded queue? I am implementing a timeout framework between two services. Which one is the best option between these two? Or do I have another, better option (e.g. CountDownLatch, etc.)?
ExecutorService is an interface: it offers you nothing by way of implementation details such as unbounded queues.

Differences between these 2 factory methods

I would like to know the difference between these 2 methods:
public static ExecutorService newFixedThreadPool(int nThreads)
and
public static ExecutorService newFixedThreadPool(int nThreads, ThreadFactory tf)
Obviously one takes a specified ThreadFactory for thread creation. However, I would like to know what kind of standard ThreadFactory the former uses.
Why is it convenient to use the latter rather than the former, or vice versa?
Thanks in advance.
The former uses DefaultThreadFactory. From the documentation:
New threads are created using a ThreadFactory. If not otherwise
specified, a Executors.defaultThreadFactory() is used, that creates
threads to all be in the same java.lang.ThreadGroup and with the same
NORM_PRIORITY priority and non-daemon status. By supplying a different
ThreadFactory, you can alter the thread's name, thread group,
priority, daemon status, etc. If a ThreadFactory fails to create a
thread when asked by returning null from newThread, the executor will
continue, but might not be able to execute any tasks. Threads should
possess the "modifyThread" RuntimePermission. If worker threads or
other threads using the pool do not possess this permission, service
may be degraded: configuration changes may not take effect in a timely
manner, and a shutdown pool may remain in a state in which termination
is possible but not completed.
But you can encapsulate the thread creation in your own ThreadFactory, which is really just a use of the Factory pattern.
For Example -
class SimpleThreadFactory implements ThreadFactory {
    public Thread newThread(Runnable r) {
        Thread t = new Thread(r);
        // customize the thread here, e.g. its name, priority or daemon status
        t.setName("my-worker-thread");
        return t;
    }
}
For reference, please check the ThreadFactory documentation.
The first one uses DefaultThreadFactory, which is an inner class of Executors. When you define your own ThreadFactory you can influence the created threads: you can choose their name, priority, etc.
The first version uses Executors.defaultThreadFactory() to create its threads. Use the first version if you don't care how the threads are created, and the second if you need to impose custom settings on the threads when they are created.
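For instance (a sketch; the thread name prefix is arbitrary), passing a custom factory is an easy way to get recognizable thread names and daemon threads:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;

public class NamedPoolDemo {
    public static void main(String[] args) {
        ThreadFactory factory = new ThreadFactory() {
            private final AtomicInteger counter = new AtomicInteger();

            @Override
            public Thread newThread(Runnable r) {
                Thread t = new Thread(r, "db-worker-" + counter.incrementAndGet());
                t.setDaemon(true);   // don't keep the JVM alive just for these threads
                return t;
            }
        };

        ExecutorService pool = Executors.newFixedThreadPool(4, factory);
        pool.execute(() -> System.out.println("running on " + Thread.currentThread().getName()));
        pool.shutdown();
    }
}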

How to scale threads according to CPU cores?

I want to solve a mathematical problem with multiple threads in Java. My math problem can be separated into work units that I want to have solved in several threads.
I don't want a fixed number of threads working on it, but instead a number of threads matching the number of CPU cores. My problem is that I couldn't find an easy tutorial on the internet for this; all I found were examples with fixed thread counts.
How can this be done? Can you provide examples?
You can determine the number of processors available to the Java Virtual Machine with Runtime.getRuntime().availableProcessors(). Once you have determined the number of processors available, create that number of threads and split up your work accordingly.
Update: To further clarify, a Thread is just an object in Java, so you can create it just like you would create any other object. So, let's say that you call the above method and find that it returns 2 processors. Awesome. Now you can create a loop that generates a new Thread, splits off the work for that thread, and starts the thread. Here is some pseudocode to demonstrate what I mean:
int processors = Runtime.getRuntime().availableProcessors();

for (int i = 0; i < processors; i++) {
    Thread yourThread = new AThreadYouCreated();
    // You may need to pass in parameters depending on what work you are doing
    // and how you set up your thread.
    yourThread.start();
}
For more information on creating your own thread, head to this tutorial. Also, you may want to look at Thread Pooling for the creation of the threads.
You probably want to look at the java.util.concurrent framework for this stuff too.
Something like:
ExecutorService e = Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());
// Do work using something like either
e.execute(new Runnable() {
    public void run() {
        // do one task
    }
});
or
Future<String> future = e.submit(new Callable<String>() {
    public String call() throws Exception {
        return null;
    }
});

future.get(); // Will block till result available
This is a lot nicer than coping with your own thread pools etc.
Option 1:
newWorkStealingPool from Executors
public static ExecutorService newWorkStealingPool()
Creates a work-stealing thread pool using all available processors as its target parallelism level.
With this API, you don't need to pass the number of cores to the ExecutorService.
Implementation of this API from grepcode
/**
 * Creates a work-stealing thread pool using all
 * {@link Runtime#availableProcessors available processors}
 * as its target parallelism level.
 * @return the newly created thread pool
 * @see #newWorkStealingPool(int)
 * @since 1.8
 */
public static ExecutorService newWorkStealingPool() {
    return new ForkJoinPool
        (Runtime.getRuntime().availableProcessors(),
         ForkJoinPool.defaultForkJoinWorkerThreadFactory,
         null, true);
}
Option 2:
the newFixedThreadPool API from Executors, or another newXXX factory method that returns an ExecutorService
public static ExecutorService newFixedThreadPool(int nThreads)
replace nThreads with Runtime.getRuntime().availableProcessors()
Option 3:
ThreadPoolExecutor
public ThreadPoolExecutor(int corePoolSize,
                          int maximumPoolSize,
                          long keepAliveTime,
                          TimeUnit unit,
                          BlockingQueue<Runnable> workQueue)

Pass Runtime.getRuntime().availableProcessors() as the maximumPoolSize parameter.
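A minimal sketch of option 3 (the queue and keep-alive values are arbitrary; both corePoolSize and maximumPoolSize are set to the core count here, because with an unbounded queue a ThreadPoolExecutor never creates more than its core threads):

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class CoreSizedPool {
    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();

        ThreadPoolExecutor executor = new ThreadPoolExecutor(
                cores, cores,                          // scale the pool to the CPU count
                0L, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<Runnable>());  // unbounded work queue

        for (int i = 0; i < cores; i++) {
            executor.execute(() -> System.out.println(Thread.currentThread().getName()));
        }
        executor.shutdown();
    }
}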
Doug Lea (author of the concurrent package) has this paper which may be relevant:
http://gee.cs.oswego.edu/dl/papers/fj.pdf
The Fork/Join framework was added in Java SE 7. Below are a few more references:
http://www.ibm.com/developerworks/java/library/j-jtp11137/index.html
Article by Brian Goetz
http://www.oracle.com/technetwork/articles/java/fork-join-422606.html
The standard way is the Runtime.getRuntime().availableProcessors() method.
On most standard CPUs this returns the optimal thread count (which is not necessarily the actual CPU core count). Therefore this is what you are looking for.
Example:
ExecutorService service = Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());
Do NOT forget to shut down the executor service like this (or your program won't exit):
service.shutdown();
Here is just a quick outline of how to set up future-based multi-threaded code (off topic, just for illustration):
CompletionService<YourCallableImplementor> completionService =
        new ExecutorCompletionService<YourCallableImplementor>(service);

ArrayList<Future<YourCallableImplementor>> futures = new ArrayList<Future<YourCallableImplementor>>();

for (String computeMe : elementsToCompute) {
    futures.add(completionService.submit(new YourCallableImplementor(computeMe)));
}
Then you need to keep track on how many results you expect and retrieve them like this:
try {
    int received = 0;
    while (received < elementsToCompute.size()) {
        Future<YourCallableImplementor> resultFuture = completionService.take();
        YourCallableImplementor result = resultFuture.get();
        received++;
    }
} finally {
    service.shutdown();
}
On the Runtime class, there is a method called availableProcessors(). You can use that to figure out how many CPUs you have. Since your program is CPU bound, you would probably want to have (at most) one thread per available CPU.
