public static ExecutorService newFixedThreadPool(int nThreads) {
return new ThreadPoolExecutor(nThreads, nThreads,
0L, TimeUnit.MILLISECONDS,
new LinkedBlockingQueue<Runnable>());
}
It uses LinkedBlockingQueue. But shouldn't ConcurrentLinkedQueue be more efficient, since it is non-blocking and lock-free?
The answer is simple: ConcurrentLinkedQueue is not a BlockingQueue, but LinkedBlockingQueue is. The ThreadPoolExecutor constructors expect a BlockingQueue, which is also why Executors can create instances backed by other BlockingQueue implementations, such as SynchronousQueue or ArrayBlockingQueue (depending on the factory method you call in Executors).
So the more general question is: why BlockingQueue and not a plain Queue? Again the answer is simple: the BlockingQueue interface has more useful methods. For example, BlockingQueue.offer() reports whether an element could be inserted without violating the capacity restrictions (and without throwing an exception), and there are methods that block while the queue is empty (the timed poll() and the untimed take()). If you check the implementation of ThreadPoolExecutor, you will see that it calls exactly these convenient methods, which are missing from the Queue interface.
If you submit Runnable objects to the queue and there are no idle worker threads to consume them, they will of course sit there unexecuted until a worker becomes free. Conversely, when the queue is empty, the worker threads must block and wait for more tasks. A BlockingQueue is used precisely to implement this behavior when interacting with the pool.
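To make that concrete, here is a minimal, hypothetical sketch of a worker loop built on BlockingQueue.take(), roughly the pattern a ThreadPoolExecutor worker follows (an illustration only, not the actual JDK code):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class WorkerLoopSketch {
    public static void main(String[] args) {
        BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();

        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    // take() blocks while the queue is empty -- a plain Queue
                    // would force us to busy-poll instead.
                    Runnable task = queue.take();
                    task.run();
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // stop the loop on interrupt
            }
        });
        worker.start();

        // offer() reports whether the element was accepted without throwing.
        queue.offer(() -> System.out.println("task executed"));
    }
}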
Related
I want to use the same thread pool throughout my application. To this end, I can make the ExecutorService static and global, so that I can reference ThreadUtil.executorService to get the ExecutorService whenever I need it.
public class ThreadUtil {
public static final ExecutorService executorService = Executors.newCachedThreadPool();
}
Is it OK to instantiate and share the thread pool like this?
In addition, my application is a TCP server. If I don't know how big the pool should be, is it ok to simply use newCachedThreadPool?
When an instance with the same properties is to be used throughout your program, it is logical to declare it static and final instead of re-creating it each time, but I would personally opt for a singleton pattern instead of giving direct public access to the instance.
As for your second query, I don't see any problem with it. The first sentence of the documentation for newCachedThreadPool says
Creates a thread pool that creates new threads as needed
Since you don't know how many threads will be created, this is the most logical choice.
Note that newCachedThreadPool will re-use old threads when they are available to increase performance.
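For illustration, a minimal sketch of the singleton-style holder suggested above, keeping the cached pool from the question (the class and accessor names are just placeholders):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public final class ThreadUtil {
    // Single shared pool for the whole application.
    private static final ExecutorService EXECUTOR = Executors.newCachedThreadPool();

    private ThreadUtil() {
        // no instances
    }

    // Callers go through this accessor instead of touching the field directly.
    public static ExecutorService executor() {
        return EXECUTOR;
    }
}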
I would not make it directly global. At least wrap it in a class so you can easily use more than one pool. Having a pool of thread pools is very useful when you need more than one kind of job, or jobs with different priorities. Just put fewer threads and/or lower-priority threads in the other pool (by overriding the thread factory). For a sample, see https://github.com/tgkprog/ddt/tree/master/DdtUtils/src/main/java/org/s2n/ddt/util/threads
Usage:
// Setup:
PoolOptions options = new PoolOptions();
options.setCoreThreads(2);
options.setMaxThreads(33);
DdtPools.initPool("poolA", options);

// Offer a job:
YourJob job = new YourJob();
DdtPools.offer("poolA", job);
Also, do not use a cached pool: it can grow as needed, which is not a good idea with TCP, where connections can block indefinitely. You want a controlled pool. The above library can be reinitialized if needed (increasing the number of threads while allowing current jobs to finish before the old pool is discarded for GC).
You can make a utility JSP/servlet for those operations, like https://github.com/tgkprog/ddt/blob/master/Core/src/main/web/common/poolEnd.jsp and https://github.com/tgkprog/ddt/blob/master/Core/src/main/web/common/threads.jsp
If you have only one ExecutorService for your application, you can proceed with a static global.
Neither newCachedThreadPool() nor newFixedThreadPool() gives you control over the queue of Callable/Runnable tasks: newFixedThreadPool() uses an unbounded queue, and newCachedThreadPool() can create an unbounded number of threads, either of which may degrade the performance of the system.
I prefer to use ThreadPoolExecutor directly, which provides better control over parameters such as queue size, rejection handler, thread factory, etc.:
public ThreadPoolExecutor(int corePoolSize,
int maximumPoolSize,
long keepAliveTime,
TimeUnit unit,
BlockingQueue<Runnable> workQueue,
RejectedExecutionHandler handler)
Or
public ThreadPoolExecutor(int corePoolSize,
int maximumPoolSize,
long keepAliveTime,
TimeUnit unit,
BlockingQueue<Runnable> workQueue,
ThreadFactory threadFactory,
RejectedExecutionHandler handler)
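For illustration, a hedged sketch of wiring these parameters together with a bounded queue (the sizes and the rejection policy are arbitrary examples, not recommendations):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedPoolExample {
    public static void main(String[] args) {
        ThreadPoolExecutor executor = new ThreadPoolExecutor(
                4,                                          // corePoolSize
                8,                                          // maximumPoolSize
                60L, TimeUnit.SECONDS,                      // keepAliveTime for idle non-core threads
                new ArrayBlockingQueue<>(100),              // bounded work queue
                new ThreadPoolExecutor.CallerRunsPolicy()); // rejection handler when saturated

        executor.execute(() -> System.out.println("task running"));
        executor.shutdown();
    }
}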
Refer to the post below for more details:
FixedThreadPool vs CachedThreadPool: the lesser of two evils
My question is: does it make sense to use Executors.newFixedThreadPool(1)? In a two-thread scenario (main + oneAnotherThread), is it efficient to use an executor service? Is creating a thread directly with new Thread(new Runnable(){ }) better than using an ExecutorService? What are the upsides and downsides of using an ExecutorService for such scenarios?
PS: The main thread and oneAnotherThread don't access any common resource(s).
I have gone through: What are the advantages of using an ExecutorService? and Only one thread at a time!
does it make sense to use Executors.newFixedThreadPool(1)?
It is essentially the same thing as an Executors.newSingleThreadExecutor() except that the latter is not reconfigurable, as indicated in the javadoc, whereas the former is if you cast it to a ThreadPoolExecutor.
In two threads (main + oneAnotherThread) scenarios is it efficient to use executor service?
An executor service is a fairly thin wrapper around threads that significantly simplifies thread lifecycle management. If the only thing you need is to call new Thread(runnable).start(); and move on, then there is no real need for an ExecutorService.
In most real-life cases, however, the ability to monitor the life cycle of the tasks (through the returned Futures), the fact that the executor re-creates threads as required in case of uncaught exceptions, the performance gain of recycling threads instead of creating new ones, etc., make an executor service a much more powerful solution at little additional cost.
Bottom line: I don't see any downsides of using an executor service vs. a thread.
The question Difference between Executors.newSingleThreadExecutor().execute(command) and new Thread(command).start() goes through the small differences in behaviour between the two options.
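As a small illustration of the reconfigurability point above, a sketch (assuming the factory keeps returning a ThreadPoolExecutor, as it does in current JDKs):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;

public class ReconfigureExample {
    public static void main(String[] args) {
        ExecutorService fixed = Executors.newFixedThreadPool(1);

        // newFixedThreadPool(1) can be resized later by casting...
        ThreadPoolExecutor tpe = (ThreadPoolExecutor) fixed;
        tpe.setMaximumPoolSize(4);
        tpe.setCorePoolSize(4);

        // ...whereas newSingleThreadExecutor() is wrapped and cannot be cast:
        // (ThreadPoolExecutor) Executors.newSingleThreadExecutor(); // would throw ClassCastException

        tpe.shutdown();
    }
}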
Sometimes you need to use Executors.newFixedThreadPool(1) so you can determine the number of tasks waiting in the queue:
private final ExecutorService executor = Executors.newFixedThreadPool(1);
public int getTaskInQueueCount() {
ThreadPoolExecutor threadPoolExecutor = (ThreadPoolExecutor) executor;
return threadPoolExecutor.getQueue().size();
}
does it make sense to use Executors.newFixedThreadPool(1)??
Yes, it makes sense if you want to process all submitted tasks in order of arrival.
In a two-thread scenario (main + oneAnotherThread), is it efficient to use an executor service? Is creating a thread directly with new Thread(new Runnable(){ }) better than using an ExecutorService?
I prefer ExecutorService or ThreadPoolExecutor even for 1 thread.
Refer to the SE question below for an explanation of the advantages of ThreadPoolExecutor over casually spawned threads:
ExecutorService vs Casual Thread Spawner
What are the upsides and downsides of using ExecutorService for such scenarios?
Have a look at this related SE question regarding ExecutorService use cases:
Java's Fork/Join vs ExecutorService - when to use which?
Regarding the query in your subject line (from grepcode), both are the same:
The newFixedThreadPool API returns a ThreadPoolExecutor as an ExecutorService:
public static ExecutorService newFixedThreadPool(int nThreads) {
    return new ThreadPoolExecutor(nThreads, nThreads,
                                  0L, TimeUnit.MILLISECONDS,
                                  new LinkedBlockingQueue<Runnable>());
}
and
newSingleThreadExecutor() returns a wrapped ThreadPoolExecutor as an ExecutorService:
public static ExecutorService newSingleThreadExecutor() {
    return new FinalizableDelegatedExecutorService
        (new ThreadPoolExecutor(1, 1,
                                0L, TimeUnit.MILLISECONDS,
                                new LinkedBlockingQueue<Runnable>()));
}
I agree with @assylias' answer regarding the similarities/differences.
Is creating a new thread directly by calling new Runnable(){ } better than using ExecutorService?
If you want to do something with the returned result after the task completes, you can use the Callable interface, which works with an ExecutorService but not with a bare new Thread(new Runnable(){}). The ExecutorService's submit() method, which takes the Callable as an argument, returns a Future. On that Future you can check whether the task has completed using the isDone() method, and you can retrieve the result using the get() method.
In this case, an ExecutorService is better than a bare thread.
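A minimal sketch of that Callable/Future pattern (the computation here is just a placeholder):

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CallableExample {
    public static void main(String[] args) throws InterruptedException, ExecutionException {
        ExecutorService executor = Executors.newFixedThreadPool(1);

        // submit() accepts a Callable and returns a Future for the result.
        Future<Integer> future = executor.submit(new Callable<Integer>() {
            @Override
            public Integer call() {
                return 21 + 21; // placeholder computation
            }
        });

        System.out.println("done yet? " + future.isDone());
        System.out.println("result: " + future.get()); // blocks until the result is ready

        executor.shutdown();
    }
}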
I have a Queue in my Server class:
public class Server {
    Queue<Input> inputQueue;
    InThread inThread;
    OutThread outThread;

    public void start() {
        inThread = new InThread(inputQueue);
        outThread = new OutThread(inputQueue);
        inThread.start();
        outThread.start();
    }
}
The inThread is responsible for putting data into the inputQueue, and the outThread is responsible for taking data out of it. InThread and OutThread execute concurrently. Is there a chance that the data will not be thread safe? If so, from what I have studied there are synchronized variables and synchronized methods. Which one should I use? Thanks.
As the queue is shared between threads, it is not thread safe without synchronizing your read and write methods.
If you want a ready-made thread-safe queue, you may consider using ConcurrentLinkedQueue:
http://docs.oracle.com/javase/6/docs/api/java/util/concurrent/ConcurrentLinkedQueue.html
There isn't much of a need for hand-synchronization, although your existing approach is not threadsafe.
java.util.concurrent.ArrayBlockingQueue<E> can work for you. Its constructor allows one to pass in a collection, so you can do:
ArrayBlockingQueue<Input> blockingQueue = new ArrayBlockingQueue<>(128, false, inputQueue);
changing 128 to the capacity you want, and the boolean to specify whether threads blocked on the queue should be served in fair (FIFO) order.
You can also use ConcurrentLinkedQueue. Its constructor is just:
ConcurrentLinkedQueue<Input> clq = new ConcurrentLinkedQueue<>(inputQueue);
I would use an ExecutorService, which wraps a thread pool and a queue and is thread safe, tested, and already written for you.
You are right that some Queue implementations require external synchronization, but if you use a thread-safe collection like ConcurrentLinkedQueue you don't need synchronized, though it won't hurt much if you use it anyway.
To me your question looks like the standard producer-consumer case, traditionally solved with wait and notify.
Just google "producer consumer example in Java" and you will get thousands of examples. Hope it helps.
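For reference, one minimal producer-consumer sketch using a BlockingQueue rather than hand-rolled wait/notify (the names here are placeholders, not the asker's InThread/OutThread classes):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class ProducerConsumerSketch {
    public static void main(String[] args) {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 5; i++) {
                    queue.put("message " + i); // blocks only if the queue is bounded and full
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 5; i++) {
                    System.out.println("consumed: " + queue.take()); // blocks while empty
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
    }
}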
I would like to know the difference between these 2 methods:
public static ExecutorService newFixedThreadPool(int nThreads)
and
public static ExecutorService newFixedThreadPool(int nThreads, ThreadFactory tf)
Obviously one takes a specified ThreadFactory for thread creation. However, I would like to know what kind of default ThreadFactory the former uses.
Why is it convenient to use the latter rather than the former, or vice versa?
Thanks in advance.
The former uses DefaultThreadFactory. From the ThreadPoolExecutor documentation:
New threads are created using a ThreadFactory. If not otherwise
specified, a Executors.defaultThreadFactory() is used, that creates
threads to all be in the same java.lang.ThreadGroup and with the same
NORM_PRIORITY priority and non-daemon status. By supplying a different
ThreadFactory, you can alter the thread's name, thread group,
priority, daemon status, etc. If a ThreadFactory fails to create a
thread when asked by returning null from newThread, the executor will
continue, but might not be able to execute any tasks. Threads should
possess the "modifyThread" RuntimePermission. If worker threads or
other threads using the pool do not possess this permission, service
may be degraded: configuration changes may not take effect in a timely
manner, and a shutdown pool may remain in a state in which termination
is possible but not completed.
Reference: the ThreadPoolExecutor documentation.
But you can encapsulate thread creation in your own ThreadFactory, which is really just a use of the factory pattern.
For example:
class SimpleThreadFactory implements ThreadFactory {
    @Override
    public Thread newThread(Runnable r) {
        Thread t = new Thread(r);
        // customize the thread here, e.g. set its name, priority, or daemon status
        return t;
    }
}
For reference, please check the documentation; there is also a good existing answer on this.
The first one uses DefaultThreadFactory, which is an inner class of Executors. When you define your own ThreadFactory you can influence the created threads: you can choose their name, priority, etc.
The first version uses Executors.defaultThreadFactory() to create its threads. You would use the first version if you don't care how the threads are created, and the second if you need to impose custom settings on the threads when they are created.
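To illustrate the difference, a small sketch that passes a custom factory to the second overload (the naming scheme is just an example):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;

public class NamedFactoryExample {
    public static void main(String[] args) {
        ThreadFactory factory = new ThreadFactory() {
            private final AtomicInteger counter = new AtomicInteger();

            @Override
            public Thread newThread(Runnable r) {
                // Give each worker a recognizable name; priority or daemon
                // status could be customized here as well.
                return new Thread(r, "worker-" + counter.incrementAndGet());
            }
        };

        ExecutorService pool = Executors.newFixedThreadPool(2, factory);
        pool.execute(() -> System.out.println(Thread.currentThread().getName()));
        pool.shutdown();
    }
}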
From the JavaDocs:
A ConcurrentLinkedQueue is an appropriate choice when many threads will share access to a common collection. This queue does not permit null elements.
ArrayBlockingQueue is a classic "bounded buffer", in which a fixed-sized array holds elements inserted by producers and extracted by consumers. This class supports an optional fairness policy for ordering waiting producer and consumer threads.
LinkedBlockingQueue typically has higher throughput than array-based queues but less predictable performance in most concurrent applications.
I have 2 scenarios, one requires the queue to support many producers (threads using it) with one consumer and the other is the other way around.
I do not understand which implementation to use. Can somebody explain what the differences are?
Also, what is the 'optional fairness policy' in the ArrayBlockingQueue?
ConcurrentLinkedQueue means no locks are taken (i.e. no synchronized(this) or Lock.lock calls). It uses a CAS (compare-and-swap) operation during modifications to see whether the head/tail node is still the same as when it started. If so, the operation succeeds. If the head/tail node is different, it spins around and tries again.
LinkedBlockingQueue takes a lock before any modification, so your offer calls block until they acquire the lock. You can use the offer overload that takes a timeout and TimeUnit to say you are only willing to wait X amount of time before abandoning the add (usually good for message-type queues where the message is stale after X milliseconds).
Fairness means that the Lock implementation keeps the threads ordered: if Thread A enters and then Thread B enters, Thread A gets the lock first. With no fairness, what happens is really undefined; it will most likely be the next thread that gets scheduled.
As for which one to use, it depends. I tend to use ConcurrentLinkedQueue because the time it takes my producers to get work onto the queue is diverse; I don't have a lot of producers producing at the exact same moment. But the consumer side is more complicated, because poll() won't go into a nice sleep state, so you have to handle that yourself.
Basically the difference between them are performance characteristics and blocking behavior.
Taking the easiest first, ArrayBlockingQueue is a queue of a fixed size. So if you set the size at 10, and attempt to insert an 11th element, the insert statement will block until another thread removes an element. The fairness issue is what happens if multiple threads try to insert and remove at the same time (in other words during the period when the Queue was blocked). A fairness algorithm ensures that the first thread that asks is the first thread that gets. Otherwise, a given thread may wait longer than other threads, causing unpredictable behavior (sometimes one thread will just take several seconds because other threads that started later got processed first). The trade-off is that it takes overhead to manage the fairness, slowing down the throughput.
The most important difference between LinkedBlockingQueue and ConcurrentLinkedQueue is that if you request an element from a LinkedBlockingQueue and the queue is empty, your thread will wait until there is something there. A ConcurrentLinkedQueue will return right away with the behavior of an empty queue.
Which one you need depends on whether you need the blocking behavior. Where you have many producers and one consumer, it sounds like you do. On the other hand, where you have many consumers and only one producer, you may not need the blocking behavior, and may be happy to just have the consumers check whether the queue is empty and move on if it is.
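A tiny sketch of that behavioral difference (output timing will vary):

import java.util.Queue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class BlockingVsNonBlocking {
    public static void main(String[] args) throws InterruptedException {
        Queue<String> nonBlocking = new ConcurrentLinkedQueue<>();
        BlockingQueue<String> blocking = new LinkedBlockingQueue<>();

        // Non-blocking: poll() on an empty queue returns null immediately.
        System.out.println("ConcurrentLinkedQueue.poll(): " + nonBlocking.poll());

        // Blocking: a timed poll() waits up to the timeout for an element to appear.
        System.out.println("LinkedBlockingQueue.poll(1s): "
                + blocking.poll(1, TimeUnit.SECONDS));

        // take() would wait indefinitely, so only call it once something will arrive.
        blocking.put("hello");
        System.out.println("LinkedBlockingQueue.take(): " + blocking.take());
    }
}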
Your question title mentions Blocking Queues. However, ConcurrentLinkedQueue is not a blocking queue.
The BlockingQueues are ArrayBlockingQueue, DelayQueue, LinkedBlockingDeque, LinkedBlockingQueue, PriorityBlockingQueue, and SynchronousQueue.
Some of these are clearly not fit for your purpose (DelayQueue, PriorityBlockingQueue, and SynchronousQueue). LinkedBlockingQueue and LinkedBlockingDeque are identical, except that the latter is a double-ended Queue (it implements the Deque interface).
Since ArrayBlockingQueue is only useful if you want to limit the number of elements, I'd stick to LinkedBlockingQueue.
ArrayBlockingQueue has a lower memory footprint: it reuses its array slots, unlike LinkedBlockingQueue, which has to create a LinkedBlockingQueue$Node object for each insertion.
1. SynchronousQueue (taken from another question)
SynchronousQueue is more of a handoff, whereas a LinkedBlockingQueue of size 1 just allows a single element. The difference is that the put() call to a SynchronousQueue will not return until there is a corresponding take() call, while with a LinkedBlockingQueue of size 1, the put() call (to an empty queue) will return immediately. It's essentially the BlockingQueue implementation for when you don't really want a queue (you don't want to maintain any pending data). A small handoff sketch is shown after this list.
2. LinkedBlockingQueue (a linked-list implementation, though not exactly the JDK's LinkedList; it uses a static inner Node class to maintain the links between elements)
Constructor for LinkedBlockingQueue
public LinkedBlockingQueue(int capacity) {
    if (capacity <= 0) throw new IllegalArgumentException();
    this.capacity = capacity;
    last = head = new Node<E>(null); // maintains an underlying linked list (use when the size is not known)
}
The Node class used to maintain the links:
static class Node<E> {
E item;
Node<E> next;
Node(E x) { item = x; }
}
3. ArrayBlockingQueue (array implementation)
Constructor for ArrayBlockingQueue
public ArrayBlockingQueue(int capacity, boolean fair) {
    if (capacity <= 0)
        throw new IllegalArgumentException();
    this.items = new Object[capacity]; // maintains an underlying array
    lock = new ReentrantLock(fair);
    notEmpty = lock.newCondition();
    notFull = lock.newCondition();
}
IMHO the biggest difference between ArrayBlockingQueue and LinkedBlockingQueue is clear from the constructors: one is backed by an array, the other by a linked list.
ArrayBlockingQueue uses a single-lock, two-condition algorithm, while LinkedBlockingQueue is a variant of the "two lock queue" algorithm, with two locks and two conditions (takeLock and putLock).
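As promised above, a small handoff sketch for SynchronousQueue: put() does not return until another thread calls take() (the sleep is only there to make the blocking visible):

import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.TimeUnit;

public class HandoffSketch {
    public static void main(String[] args) throws InterruptedException {
        SynchronousQueue<String> handoff = new SynchronousQueue<>();

        Thread consumer = new Thread(() -> {
            try {
                TimeUnit.SECONDS.sleep(1);               // arrive "late" on purpose
                System.out.println("took: " + handoff.take());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();

        long start = System.nanoTime();
        handoff.put("payload"); // blocks here until the consumer's take() happens
        System.out.printf("put() returned after ~%d ms%n",
                TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start));
    }
}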
ConcurrentLinkedQueue is lock-free; LinkedBlockingQueue is not. Every time you invoke LinkedBlockingQueue.put() or LinkedBlockingQueue.take(), you need to acquire a lock first. In other words, LinkedBlockingQueue has poorer concurrency. If you care about performance, try ConcurrentLinkedQueue + LockSupport.
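That last suggestion is not spelled out in the answer, but one common way to combine the two looks roughly like this sketch, where the consumer parks briefly instead of blocking on a lock (the parking interval and shutdown flag are arbitrary choices for illustration):

import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.LockSupport;

public class LockFreeConsumerSketch {
    private static final ConcurrentLinkedQueue<String> QUEUE = new ConcurrentLinkedQueue<>();
    private static volatile boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        Thread consumer = new Thread(() -> {
            while (running || !QUEUE.isEmpty()) {
                String item = QUEUE.poll();          // never blocks; returns null when empty
                if (item == null) {
                    LockSupport.parkNanos(TimeUnit.MICROSECONDS.toNanos(100)); // back off briefly
                } else {
                    System.out.println("consumed: " + item);
                }
            }
        });
        consumer.start();

        for (int i = 0; i < 5; i++) {
            QUEUE.offer("item " + i);                // lock-free enqueue
        }
        running = false;
        consumer.join();
    }
}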