Can ThreadPoolExecutor switch its BlockingQueue after start? - java

Can ThreadPoolExecutor change its BlockingQueue after start? I am using multiple ThreadPoolExecutors in my process, and I don't want the total number of threads in the process to exceed a certain limit. That is why I thought of switching a thread pool's BlockingQueue to a busier BlockingQueue. But I don't see any method in the ThreadPoolExecutor class that provides the facility of switching BlockingQueues. What could be the reason behind this?

Apparently ThreadPoolExecutor does give access to its BlockingQueue (via getQueue()). I can achieve the same behavior by transferring tasks from one queue to another.

Immutable objects are usually favoured in modern programming practice. Immutability usually makes things simpler as the object model grows and future enhancements arrive (and no, I don't consider Python's "let's all be responsible adults" approach modern for the sake of this argument).
As for solving your problem, you could perhaps pass in a smart "delegating" BlockingQueue implementation that implements the standard interface but backs it with some queue-switching mechanism, controlled internally or externally as your specification requires.
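A minimal sketch of that delegating idea, assuming a hypothetical SwitchableQueue wrapper (the class and method names are mine, not from any library). A real version would have to implement the full BlockingQueue interface, and note the caveat in the comments: a thread already blocked on the old delegate does not migrate automatically.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical sketch of a queue facade whose backing BlockingQueue can be
// swapped at runtime. Only the core methods are shown; a usable version
// would implement every method of the BlockingQueue interface.
class SwitchableQueue<E> {
    private final AtomicReference<BlockingQueue<E>> delegate;

    SwitchableQueue(BlockingQueue<E> initial) {
        this.delegate = new AtomicReference<>(initial);
    }

    void put(E e) throws InterruptedException { delegate.get().put(e); }

    E take() throws InterruptedException { return delegate.get().take(); }

    int size() { return delegate.get().size(); }

    // Swap the backing queue and drain any pending elements into the new one.
    // Caveat: a thread currently blocked in take() on the old queue stays
    // parked on the old queue; handling that needs extra signalling.
    void switchTo(BlockingQueue<E> next) {
        BlockingQueue<E> old = delegate.getAndSet(next);
        old.drainTo(next);
    }
}
```

Whether the switch is triggered internally (e.g. by a load threshold) or externally is up to the surrounding design, as the answer says.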

Related

What is the advantage of CountDownLatch over the wait/notify mechanism?

I have read this answer:
Difference between wait-notify and CountDownLatch
I know the two approaches are different:
CountDownLatch is a newer mechanism, while wait/notify is the original way of coordinating between threads;
wait is a method of Object, await is a method of CountDownLatch;
using CountDownLatch is easier and cleaner, etc.
My question is more of the functional aspect:
Is there any situation which cannot be solved by wait/notify mechanism but can be solved only by CountDownLatch?
If not, then functionally, CountDownLatch was introduced solely to make coordination between threads easier and cleaner, right?
Sure, you can build the same functionality with just wait, notify, synchronized and so on. CountDownLatch is a normal Java class implemented using such primitives. For details you can have a look at the actual source code: http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/6-b14/java/util/concurrent/CountDownLatch.java
The classes in java.util.concurrent are designed to make certain multithreading scenarios easier to code and manage. You can use low-level constructs such as wait and notify, but you really need to know what you are doing.
Here is the excerpt from the API:
Utility classes commonly useful in concurrent programming. This
package includes a few small standardized extensible frameworks, as
well as some classes that provide useful functionality and are
otherwise tedious or difficult to implement.
Consider a case where you may not want to wait if a condition is already met. You could get your hands dirty and probe a lock yourself, but hand-rolled code like that is often buggy.
A CountDownLatch comes to the rescue, yes for convenience, but not solely to solve the wait/notify paradigm.
The obvious use of CountDownLatch as a way to wait for multiple conditions also comes to mind.
Why reinvent the wheel when it's available first party?
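To illustrate the "wait for multiple conditions" use mentioned above, here is a minimal sketch of the everyday "wait for N workers" case. With wait/notify you would maintain a shared counter, synchronize on a lock, and loop to guard against spurious wakeups; with the latch it is a countDown/await pair.

```java
import java.util.concurrent.CountDownLatch;

public class LatchDemo {
    public static void main(String[] args) throws InterruptedException {
        int workers = 3;
        CountDownLatch done = new CountDownLatch(workers);
        for (int i = 0; i < workers; i++) {
            new Thread(() -> {
                // ... do some work ...
                done.countDown(); // signal completion; safe even before await() starts
            }).start();
        }
        done.await(); // blocks until the count reaches zero
        System.out.println("all workers finished");
    }
}
```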

Java Concurrency: How to select and configure Executors

The Java Concurrency API gives you Executor and ExecutorService interfaces to build from, and ships with several concrete implementations (ThreadPoolExecutor and ScheduledThreadPoolExecutor).
I'm completely new to Java concurrency, and am having difficulty finding answers to several closely related questions. Rather than cluttering SO with all these tiny questions, I decided to bundle them together, because there's probably a way to answer them all in one fell swoop (probably because I'm not seeing the whole picture here):
Is it common practice to implement your own Executor/ExecutorService? In what cases would you do this instead of using the two concretions I mention above? In what cases are the two concretions preferable over something "homegrown"?
I don't understand how all of the concurrent collections relate to Executors. For instance, does ThreadPoolExecutor use, say, ConcurrentLinkedQueue under the hood to queue up submitted tasks? Or are you (the developer using the API) supposed to select and use, say, ConcurrentLinkedQueue inside your parallelized run() method? Basically, are the concurrent collections there to be used internally by the Executors, or do you use them to help write non-blocking algorithms?
Can you configure which concurrent collections an Executor uses under the hood (to store submitted tasks), and is this common practice?
Thanks in advance!
Is it common practice to implement your own Executor/ExecutorService?
No. I've never had to do this and I've been using the concurrency package for some time. The complexity of these classes and the performance implications around getting them "wrong" mean that you should really think carefully about it before undertaking such a project.
The only time that I felt the need to implement my own executor service was when I wanted to implement a "self-run" executor service. That was until a friend showed me that there was a way to do it with a RejectedExecutionHandler.
The only reason why I'd wanted to tweak the behavior of the ThreadPoolExecutor was to have it start all of the threads up to the max-threads and then stick the jobs into the queue. By default the ThreadPoolExecutor starts min-threads and then fills the queue before starting another thread. Not what I expect or want. But then I'd just be copying the code from the JDK and changing it -- not implementing it from scratch.
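As an aside, one common way to get that "grow to max before queueing" behavior without copying JDK code is to set the core size equal to the max size and let core threads time out. This is a sketch of that configuration, not the answerer's approach:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class EagerPool {
    // With corePoolSize == maximumPoolSize the pool creates a new thread per
    // task up to the max before it starts queueing; allowCoreThreadTimeOut
    // lets the pool shrink back down when idle.
    static ThreadPoolExecutor create(int maxThreads) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                maxThreads, maxThreads,
                30L, TimeUnit.SECONDS,
                new LinkedBlockingQueue<Runnable>());
        pool.allowCoreThreadTimeOut(true);
        return pool;
    }
}
```

The trade-off is that every task up to maxThreads gets its own thread immediately, which may or may not be what you want under light load.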
I don't understand how all of the concurrent collections relate to Executors. For instance, does ThreadPoolExecutor use, say, ConcurrentLinkedQueue under the hood to queue up submitted tasks?
If you are using one of the Executors helper methods then you don't have to worry about this. If you are instantiating ThreadPoolExecutor yourself then you provide the BlockingQueue to use.
public static ExecutorService newFixedThreadPool(int nThreads) {
    return new ThreadPoolExecutor(nThreads, nThreads,
            0L, TimeUnit.MILLISECONDS,
            new LinkedBlockingQueue<Runnable>());
}
Versus:
ExecutorService threadPool =
    new ThreadPoolExecutor(minThreads, maxThreads,
            0L, TimeUnit.MILLISECONDS,
            new SynchronousQueue<Runnable>());
Can you configure which concurrent collections an Executor uses under the hood (to store submitted tasks), and is this common practice?
See the previous answer: you pass the BlockingQueue into the ThreadPoolExecutor constructor yourself.

Analysing a BlockingQueue usage example

I was looking at the "usage example based on a typical producer-consumer scenario" at:
http://download.oracle.com/javase/1.5.0/docs/api/java/util/concurrent/BlockingQueue.html#put(E)
Is the example correct?
I think the put and take operations would need a lock on some resource before modifying the queue, but that is not happening here.
Also, had this been one of the Concurrent* queues, the lack of locks would have been understandable, since atomic operations on a concurrent queue do not need locks.
I do not think there is anything to add to what is written in the API:
A Queue that additionally supports operations that wait for the queue to become non-empty when retrieving an element, and wait for space to become available in the queue when storing an element.
BlockingQueue implementations are thread-safe. All queuing methods achieve their effects atomically using internal locks or other forms of concurrency control.
BlockingQueue is just an interface. An implementation could use synchronized blocks, Lock, or be lock-free. AFAIK most methods use Lock in the standard implementations.
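A minimal producer-consumer along the lines of the linked javadoc example. Note there are no explicit locks in user code: put() blocks while the queue is full and take() blocks while it is empty, with all synchronization internal to the implementation.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ProducerConsumer {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(10);
        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 5; i++) queue.put(i); // blocks if the queue is full
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();
        int sum = 0;
        for (int i = 0; i < 5; i++) sum += queue.take(); // blocks if the queue is empty
        producer.join();
        System.out.println(sum); // 0+1+2+3+4 = 10
    }
}
```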

Why is there no scheduled cached thread pool provided by the Java Executors class?

Executors provides newCachedThreadPool() and newScheduledThreadPool(), but not newCachedScheduledThreadPool(), what gives here? I have an application that receives bursty messages and needs to schedule a fairly lengthy processing step after a fixed delay for each. The time constraints aren't super tight, but I would prefer to have more threads created on the fly if I exceed the pool size and then have them trimmed back during periods of inactivity. Is there something I've missed in the concurrent library, or do I need to write my own?
By design, ScheduledThreadPoolExecutor is a fixed size. You can use a single-threaded version that submits to a normal ExecutorService for performing the task. This event thread + worker pool is fairly easy to coordinate, and the flexibility makes up for the dedicated thread. I've used this in the past to replace TimerTasks and other non-critical tasks so they utilize a common executor as a system-wide pool.
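A sketch of that event-thread-plus-worker-pool arrangement (the class and method names here are mine): one scheduler thread only fires the trigger at the right moment, while the cached pool does the lengthy processing and grows or shrinks with the bursts.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class DelayedDispatch {
    // One thread for timing only; it does no real work.
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
    // Cached pool: threads are created on demand and trimmed after inactivity.
    private final ExecutorService workers = Executors.newCachedThreadPool();

    // Run `work` on the worker pool after a fixed delay.
    public void schedule(Runnable work, long delayMillis) {
        scheduler.schedule(() -> { workers.submit(work); },
                delayMillis, TimeUnit.MILLISECONDS);
    }

    public void shutdown() {
        scheduler.shutdown();
        workers.shutdown();
    }
}
```

Because the scheduler thread only hands off, a burst of due tasks never blocks the timing thread behind one long-running job.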
A workaround suggested in Why does ScheduledThreadPoolExecutor only accept a fixed number of threads?:
scheduledExecutor = new ScheduledThreadPoolExecutor(128); // no more than 128 threads
scheduledExecutor.setKeepAliveTime(10, TimeUnit.SECONDS);
scheduledExecutor.allowCoreThreadTimeOut(true);
java.util.concurrent.Executors is nothing more than a collection of static convenience methods that construct common arrangements of executors.
If you want something specific that isn't offered by Executors, then feel free to construct your own instance of the implementation classes, using the examples in Executors as a guide.
Like skaffman says, Executors is only a collection of factory methods. If you need a particular instance, you can always check the existing Executor implementations. In your case, I think that calling one of the various constructors of ScheduledThreadPoolExecutor would be a good idea.

Any practical example of LockSupport & AbstractQueuedSynchronizer use?

Can anyone give a simple practical example of LockSupport & AbstractQueuedSynchronizer use? The example given in the javadocs is quite contrived.
I understand the usage of Semaphore permits.
Thanks for any response.
If you're talking about using a locking mechanism (or even sync barriers), just use a java.util.concurrent.Lock. The obvious suggestion is to use a ReentrantLock, which delegates to a Sync; the Sync is an AQS, which in turn uses LockSupport.
It's all done under the covers for you.
Edit:
Now let's go over the practical uses of AbstractQueuedSynchronizer (AQS).
Concurrency constructs, though very different in their usage, can all share the same underlying functions:
i.e. under some condition, park this thread; under some other condition, wake a thread up.
This is a very broad set of instructions, but it makes it obvious that most concurrency structures need some common functionality that can handle those operations for them. Enter AQS. There are five major synchronizers:
ReentrantLock
ReadLock
WriteLock
Semaphore
CountDownLatch
Now, all five of these structures have very different rules of use. A CountDownLatch can allow many threads to run at the same time, but forces one (or more) threads to wait until at least n threads have counted down on said latch.
ReentrantLock forces only one thread at a time to enter a critical section and queues up all other threads to wait for it to complete.
ReadLock allows any number of reading threads into the critical section until the write lock is acquired.
The examples can go on, but the big picture here is that they all use AQS. This is because they can use the primitive functions that AQS offers and implement more complex functionality on top of them. AQS lets you park, unpark, and wake up threads (interruptibly if need be), but in such a way that you can support many complex scenarios.
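To make that concrete, here is roughly the BooleanLatch example from the AQS javadoc: a one-shot gate where state 0 means closed and 1 means open. Threads calling await() park via AQS's queue until some thread calls signal().

```java
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

class BooleanLatch {
    private static final class Sync extends AbstractQueuedSynchronizer {
        boolean isSignalled() { return getState() != 0; }

        @Override protected int tryAcquireShared(int ignore) {
            // A non-negative return means "acquired": pass through once open.
            return isSignalled() ? 1 : -1;
        }

        @Override protected boolean tryReleaseShared(int ignore) {
            setState(1);   // open the gate permanently
            return true;   // tell AQS to wake all queued waiters
        }
    }

    private final Sync sync = new Sync();

    public void await() throws InterruptedException {
        sync.acquireSharedInterruptibly(1); // parks until tryAcquireShared >= 0
    }

    public void signal() {
        sync.releaseShared(1); // opens the gate and releases waiters
    }
}
```

All the queueing, parking, and interruptible waiting come from AQS; the subclass only defines when an acquire or release succeeds.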
They are not meant for direct use in client code; they are more for helping to build new concurrent classes.
AQS is a wonderful class for building concurrency primitives, but it is complex and requires a bit of study to use properly. I have used it for a few things like lazy initialisation and a simple, fast, reusable latch.
As complex as it is, I don't think AQS is particularly vague; it has excellent javadocs describing how to use it properly.
The 2.7 release of the Disruptor uses LockSupport.parkNanos instead of Thread.sleep to reduce latency:
http://code.google.com/p/disruptor/
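The pattern looks roughly like this (a sketch, not the Disruptor's actual code): parkNanos allows sub-millisecond pauses without the InterruptedException ceremony of Thread.sleep, but it may return spuriously, so the condition is always re-checked in a loop.

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.LockSupport;

public class SpinWait {
    public static void main(String[] args) {
        AtomicBoolean ready = new AtomicBoolean(false);
        new Thread(() -> ready.set(true)).start();
        // parkNanos may return early or spuriously, so loop on the condition,
        // the same rule as with Object.wait.
        while (!ready.get()) {
            LockSupport.parkNanos(1_000L); // ~1 microsecond pause hint
        }
        System.out.println("ready");
    }
}
```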
AFAIK, AbstractQueuedSynchronizer is used to manage state transitions. The JDK uses it for Sync, an internal class of java.util.concurrent.FutureTask. The Sync class manages the states (READY, RUNNING, RAN, and CANCELLED) of FutureTask and the transitions between them.
This allows, as you may know, FutureTask to block on FutureTask.get() until the RAN state is reached, for example.