I am looking for a thread-safe blocking FIFO, usable indefinitely, that is backed by a fixed buffer such as an array. The semantics are that multiple reader and writer threads can access it safely. Writers block if the buffer is full rather than overwriting the oldest item; readers block if the buffer is empty. FIFO ordering must be maintained even after the counters of total added and total removed items have wrapped around the internal buffer size one or many times.
Interestingly, the usual places I would look for this (Java's own concurrent collections, Commons Collections, Guava) don't seem to have an instant answer to such an 'obvious' requirement.
You are actually describing an ArrayBlockingQueue.
It is thread-safe and has been designed for that exact purpose:
writers wait for space to become available if the queue is full
readers wait for an element to become available if the queue is empty (take() waits indefinitely; poll(timeout, unit) waits at most the given time)
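For illustration, here is a minimal producer/consumer sketch; the capacity, item type, loop bounds, and the "DONE" end-of-stream marker are arbitrary assumptions:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ArrayBlockingQueueDemo {
    public static void main(String[] args) {
        // Capacity 8: put() blocks when full, take() blocks when empty.
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(8);

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 100; i++) {
                    queue.put("item-" + i); // blocks while the buffer is full
                }
                queue.put("DONE"); // simple end-of-stream marker
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                String item;
                while (!(item = queue.take()).equals("DONE")) { // take() blocks while empty
                    System.out.println(item);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
    }
}

FIFO order is preserved even after the internal array indices have wrapped around many times, which is exactly the requirement in the question.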
It sounds like you're looking for ArrayBlockingQueue.
What about java.util.concurrent.ArrayBlockingQueue?
It's not clear whether you are looking for an infinite blocking queue or for a bounded blocking queue.
Bounded blocking queue: java.util.concurrent.ArrayBlockingQueue
Infinite blocking queue (limited only by available RAM): java.util.concurrent.LinkedBlockingQueue
For all of these cases I would suggest using ArrayBlockingQueue.
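For example, the two variants differ only in construction (the capacity and element type here are illustrative):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

class QueueChoices {
    // put() blocks once 1024 elements are queued (capacity is an arbitrary example)
    BlockingQueue<String> bounded = new ArrayBlockingQueue<>(1024);
    // capacity defaults to Integer.MAX_VALUE; limited in practice only by the heap
    BlockingQueue<String> unbounded = new LinkedBlockingQueue<>();
}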
For an infinite queue you would have to create your own class implementing the BlockingQueue interface using a queue delegate (maybe an ArrayBlockingQueue) and, when the queue runs full, adapt the size by creating a new, bigger delegate. This is effectively infinite up to Integer.MAX_VALUE elements and avoids the GC overhead of linked queues (which must allocate a node for each inserted object). If needed, you can shrink the delegate, too.
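A rough sketch of the growth idea only, not a full BlockingQueue implementation; the class name, initial capacity, and growth factor are made up for illustration:

import java.util.concurrent.ArrayBlockingQueue;

public class GrowOnDemandQueue<E> {
    private ArrayBlockingQueue<E> delegate = new ArrayBlockingQueue<>(16);

    public synchronized void put(E e) {
        if (delegate.remainingCapacity() == 0) {
            // Full: replace the delegate with one twice the size, keeping FIFO order.
            ArrayBlockingQueue<E> bigger = new ArrayBlockingQueue<>(delegate.size() * 2);
            delegate.drainTo(bigger);
            delegate = bigger;
        }
        delegate.add(e); // cannot fail: we just guaranteed spare capacity
    }

    public synchronized E poll() {
        return delegate.poll(); // null when empty; a blocking take() would need wait/notify
    }
}

A real implementation would also need blocking take() semantics (e.g., via wait()/notifyAll() or a Condition) and would have to weigh the cost of the copy under contention.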
I see from the Java docs:
ThreadPoolExecutor(int corePoolSize,
                   int maximumPoolSize,
                   long keepAliveTime,
                   TimeUnit unit,
                   BlockingQueue<Runnable> workQueue,
                   RejectedExecutionHandler handler)
Where -
workQueue – the queue to use for holding tasks before they are executed. This queue will hold only the Runnable tasks submitted by the execute method.
Now, Java provides various types of blocking queues, and the javadoc clearly says when to use which type of queue with ThreadPoolExecutor:
Queuing
Any BlockingQueue may be used to transfer and hold submitted tasks. The use of this queue interacts with pool sizing:
If fewer than corePoolSize threads are running, the Executor always prefers adding a new thread rather than queuing.
If corePoolSize or more threads are running, the Executor always prefers queuing a request rather than adding a new thread.
If a request cannot be queued, a new thread is created unless this would exceed maximumPoolSize, in which case, the task will be rejected.
There are three general strategies for queuing:
Direct handoffs. A good default choice for a work queue is a SynchronousQueue that hands off tasks to threads without otherwise holding them. Here, an attempt to queue a task will fail if no threads are immediately available to run it, so a new thread will be constructed. This policy avoids lockups when handling sets of requests that might have internal dependencies. Direct handoffs generally require unbounded maximumPoolSizes to avoid rejection of new submitted tasks. This in turn admits the possibility of unbounded thread growth when commands continue to arrive on average faster than they can be processed.
Unbounded queues. Using an unbounded queue (for example a LinkedBlockingQueue without a predefined capacity) will cause new tasks to wait in the queue when all corePoolSize threads are busy. Thus, no more than corePoolSize threads will ever be created. (And the value of the maximumPoolSize therefore doesn't have any effect.) This may be appropriate when each task is completely independent of others, so tasks cannot affect each others execution; for example, in a web page server. While this style of queuing can be useful in smoothing out transient bursts of requests, it admits the possibility of unbounded work queue growth when commands continue to arrive on average faster than they can be processed.
Bounded queues. A bounded queue (for example, an ArrayBlockingQueue) helps prevent resource exhaustion when used with finite maximumPoolSizes, but can be more difficult to tune and control. Queue sizes and maximum pool sizes may be traded off for each other: Using large queues and small pools minimizes CPU usage, OS resources, and context-switching overhead, but can lead to artificially low throughput. If tasks frequently block (for example if they are I/O bound), a system may be able to schedule time for more threads than you otherwise allow. Use of small queues generally requires larger pool sizes, which keeps CPUs busier but may encounter unacceptable scheduling overhead, which also decreases throughput.
Below is my question:
I have seen code used like this:
BlockingQueue<Runnable> workQueue = new LinkedBlockingDeque<>(90);
ExecutorService executorService =
        new ThreadPoolExecutor(1, 10, 30, TimeUnit.SECONDS, workQueue,
                new ThreadPoolExecutor.CallerRunsPolicy());
So, since the deque in the above code has a fixed capacity anyway, what advantage am I getting from LinkedBlockingDeque<>(90) compared to the following?
LinkedBlockingQueue<>(90); (I just want to know about the deque's advantage over a queue in this case, not in general.) How will the Executor benefit from a deque over a queue?
ArrayBlockingQueue<>(90); (I see one can also mention fairness etc., but that is not of current interest to me.) So why not just use an array-backed queue rather than a deque (i.e., when using a deque of fixed capacity)?
LinkedBlockingQueue is an optionally-bounded blocking queue based on linked nodes. If no capacity is specified, it defaults to Integer.MAX_VALUE.
ArrayBlockingQueue is a bounded blocking queue in which a fixed-size array holds the elements.
In your case, there's no benefit anywhere. ArrayBlockingQueue may prove to be more efficient, as it uses a fixed-size array held in a single contiguous block of memory.
The difference between a Queue and a Deque is the access mechanism. A Queue is FIFO, while a Deque supports insertion and removal at both ends, so it can be used as either FIFO or LIFO.
In LIFO order, the last task inserted is the first to be executed.
In FIFO order, the first task inserted is the first to be executed.
Consider the following: you want your tasks executed in the order they come in? Use FIFO. You want the most recently added task executed first? Use LIFO. A small demo follows below.
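A sketch of the two orders using the same LinkedBlockingDeque (values are illustrative):

import java.util.concurrent.LinkedBlockingDeque;

public class DequeOrderDemo {
    public static void main(String[] args) {
        LinkedBlockingDeque<Integer> deque = new LinkedBlockingDeque<>(90);
        deque.addLast(1);
        deque.addLast(2);
        deque.addLast(3);
        System.out.println(deque.pollFirst()); // FIFO: 1, the oldest element
        System.out.println(deque.pollLast());  // LIFO: 3, the newest element
    }
}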
The main benefit is when you're using the thread pool to execute some kind of a pipeline. As a rule of thumb, at each stage in a pipeline the queue either is almost always empty (the producers tend to be slower than the consumers), or else is almost always full (the producers tend to be faster).
If the producers are faster, and the application is meant to continue running indefinitely, then you need a fixed-size, blocking queue to put "back pressure" on the producers. If there were no back pressure, the queue would continue to grow until, eventually, something bad happened (e.g., the process runs out of memory, or the system breaks down because "tasks" spend too much time delayed in the queues).
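As a sketch of that back pressure (the pool sizes, capacity, and rejection policy are assumptions, echoing the code in the question): a bounded queue plus CallerRunsPolicy makes an over-eager producer execute overflow tasks itself, which naturally throttles it.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

class BackPressurePool {
    ExecutorService pool = new ThreadPoolExecutor(
            4, 4, 0L, TimeUnit.MILLISECONDS,
            new ArrayBlockingQueue<>(100),              // at most 100 queued tasks
            new ThreadPoolExecutor.CallerRunsPolicy()); // overflow runs on the submitter
}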
I need to make a queue with n semaphores so that processes that could not enter because of the size limit wait in a pool until the queue has room. When a process holds a semaphore, a thread pool runs its function in another thread. I also need a concurrent list of the semaphore-holding processes' IDs that is updated along with the semaphore queue. How can I do this using modern Java 8 patterns?
It strikes me that there is a much simpler solution that doesn't involve explicit semaphores and custom code (and bugs).
Just use a bounded BlockingQueue (javadoc) and have the threads use put(...) to add items to the queue. When the queue is full, put will block the calling thread until queue space is available. If you don't want the thread to block indefinitely, use offer with a suitable timeout.
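A minimal sketch of both styles (the capacity and timeout are arbitrary):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

public class PutVersusOffer {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(10);

        queue.put("job-1"); // blocks until space is available

        boolean accepted = queue.offer("job-2", 5, TimeUnit.SECONDS); // gives up after 5s
        if (!accepted) {
            // handle the rejection: log it, retry, or drop the job
        }
    }
}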
To begin with, I have used search and found n topics related to this question. Unfortunately, they didn't help me, so this will be topic n++ :)
Situation: I will have a few worker threads (the same class, just many duplicates; let's call them WT) and one result-writing thread (RT).
The WTs will add objects to the queue, and the RT will take them. Since there will be many WTs, won't there be memory problems (independent of the max queue size)? Will those operations wait for each other to complete?
Moreover, as I understand it, BlockingQueue is quite slow, so maybe I should leave it and use a normal Queue inside synchronized blocks? Or should I consider using a SynchronizedQueue?
LinkedBlockingQueue is designed to handle multiple threads writing to the same queue. From the documentation:
BlockingQueue implementations are thread-safe. All queuing methods achieve their effects atomically using internal locks or other forms of concurrency control. However, the bulk Collection operations addAll, containsAll, retainAll and removeAll are not necessarily performed atomically unless specified otherwise in an implementation.
Therefore, you are quite safe (unless you expect the bulk operations to be atomic).
Of course, if thread A and thread B are writing to the same queue, the order of A's items relative to B's items will be indeterminate unless you synchronize A and B.
As to the choice of queue implementation, go with the simplest that does the job, and then profile. This will give you accurate data on where the bottlenecks are so you won't have to guess.
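For example, several WTs can safely put into one LinkedBlockingQueue while a single RT takes from it; the thread count and the poison-pill shutdown marker below are illustrative assumptions:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class ManyWritersOneReader {
    private static final String POISON = "POISON"; // hypothetical end-of-work marker

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();
        int writers = 4;

        for (int w = 0; w < writers; w++) {
            final int id = w;
            new Thread(() -> {
                for (int i = 0; i < 1000; i++) {
                    queue.add("result-" + id + "-" + i); // thread-safe without external locking
                }
                queue.add(POISON); // each WT announces it is done
            }).start();
        }

        int finishedWriters = 0;
        while (finishedWriters < writers) {
            String result = queue.take(); // blocks while the queue is empty
            if (result.equals(POISON)) {
                finishedWriters++;
            } else {
                System.out.println(result); // the RT writes the result out
            }
        }
    }
}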
I was looking at the "usage example based on a typical producer-consumer scenario" at:
http://download.oracle.com/javase/1.5.0/docs/api/java/util/concurrent/BlockingQueue.html#put(E)
Is the example correct?
I think the put and take operations need a lock on some resource before proceeding to modify the queue, but that is not happening here.
Also, had this been a Concurrent kind of queue (such as ConcurrentLinkedQueue), the lack of locks would have been understandable, since atomic operations on a concurrent queue do not need locks.
I do not think there is anything to add to what is written in the API:
A Queue that additionally supports operations that wait for the queue to become non-empty when retrieving an element, and wait for space to become available in the queue when storing an element.
BlockingQueue implementations are thread-safe. All queuing methods achieve their effects atomically using internal locks or other forms of concurrency control.
BlockingQueue is just an interface. An implementation could use synchronized blocks, Lock, or be lock-free. AFAIK most implementations use a Lock internally, which is why callers do not need to lock anything themselves.
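To make the "internal locks" point concrete, here is a simplified sketch in the spirit of the BoundedBuffer example from the Condition javadoc; the real ArrayBlockingQueue is considerably more refined, so treat this only as an illustration of why callers need no external lock:

import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

class SimpleBoundedQueue<E> {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notFull = lock.newCondition();
    private final Condition notEmpty = lock.newCondition();
    private final Object[] items = new Object[16];
    private int putIndex, takeIndex, count;

    public void put(E e) throws InterruptedException {
        lock.lock(); // the lock lives inside the queue, not in the caller
        try {
            while (count == items.length)
                notFull.await(); // producer sleeps until a slot frees up
            items[putIndex] = e;
            putIndex = (putIndex + 1) % items.length;
            count++;
            notEmpty.signal();
        } finally {
            lock.unlock();
        }
    }

    @SuppressWarnings("unchecked")
    public E take() throws InterruptedException {
        lock.lock();
        try {
            while (count == 0)
                notEmpty.await(); // consumer sleeps until an element arrives
            E e = (E) items[takeIndex];
            items[takeIndex] = null;
            takeIndex = (takeIndex + 1) % items.length;
            count--;
            notFull.signal();
            return e;
        } finally {
            lock.unlock();
        }
    }
}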
I have the classic problem of a thread pushing events to the incoming queue of a second thread. Only this time, I am very interested in performance. What I want to achieve is:
I want concurrent access to the queue, the producer pushing, the receiver popping.
When the queue is empty, I want the consumer to block on the queue, waiting for the producer.
My first idea was to use a LinkedBlockingQueue, but I soon realized that it is not concurrent and performance suffered. On the other hand, I now use a ConcurrentLinkedQueue, but I am still paying the cost of wait()/notify() on each publication. Since the consumer, upon finding an empty queue, does not block, I have to synchronize and wait() on a lock. On the other side, the producer has to grab that lock and notify() upon every single publication. The overall result is that I am paying the cost of
synchronized (lock) { lock.notify(); } on every single publication, even when not needed.
What I guess is needed here is a queue that is both blocking and concurrent. I imagine a push() operation working as in ConcurrentLinkedQueue, with an extra notify() on the object when the pushed element is the first in the list. I believe such a check already exists in ConcurrentLinkedQueue, as pushing requires connecting with the next element. Thus, this would be much faster than synchronizing every time on the external lock.
Is something like this available/reasonable?
I think you can stick with java.util.concurrent.LinkedBlockingQueue regardless of your doubts. It is concurrent. I have no idea about its performance, though. Probably another implementation of BlockingQueue will suit you better. There aren't too many of them, so run performance tests and measure.
Similar to this answer https://stackoverflow.com/a/1212515/1102730, but a bit different: I ended up using an ExecutorService. You can instantiate one using Executors.newSingleThreadExecutor(). I needed a concurrent queue for reading/writing BufferedImages to files, as well as atomic reads and writes. I only need a single thread because the file I/O is orders of magnitude faster than the source, network I/O. Also, I was more concerned about atomicity of actions and correctness than performance, but this approach can also be done with multiple threads in the pool to speed things up.
To get an image (Try-Catch-Finally omitted):
Future<BufferedImage> futureImage = executorService.submit(new Callable<BufferedImage>() {
    @Override
    public BufferedImage call() throws Exception {
        // Open the file and decode it; closing the stream is left to the omitted finally block
        ImageInputStream is = new FileImageInputStream(file);
        return ImageIO.read(is);
    }
});

BufferedImage image = futureImage.get();
To save an image (Try-Catch-Finally omitted):
Future<Boolean> futureWrite = executorService.submit(new Callable<Boolean>() {
    @Override
    public Boolean call() throws Exception {
        // Encode and write the image; closing the stream is left to the omitted finally block
        FileOutputStream os = new FileOutputStream(file);
        return ImageIO.write(image, getFileFormat(), os);
    }
});

boolean wasWritten = futureWrite.get();
It's important to note that you should flush and close your streams in a finally block. I don't know about how it performs compared to other solutions, but it is pretty versatile.
I would suggest you look at Executors.newSingleThreadExecutor(). It will handle keeping your tasks ordered for you, and if you submit Callables to your executor, you will be able to get the blocking behavior you are looking for as well.
You can try LinkedTransferQueue from jsr166: http://gee.cs.oswego.edu/cgi-bin/viewcvs.cgi/jsr166/src/jsr166y/
It fulfills your requirements and has less overhead for the offer/poll operations.
As I can see from the code, when the queue is not empty, it uses atomic operations to poll elements. And when the queue is empty, it spins for some time and parks the thread if unsuccessful.
I think it can help in your case.
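(Since Java 7, LinkedTransferQueue has been part of java.util.concurrent, so the jsr166y jar is no longer needed.) A minimal sketch of the blocking consumer it gives you; the event type and values are assumptions:

import java.util.concurrent.LinkedTransferQueue;
import java.util.concurrent.TransferQueue;

public class TransferQueueDemo {
    public static void main(String[] args) throws InterruptedException {
        TransferQueue<String> queue = new LinkedTransferQueue<>();

        Thread consumer = new Thread(() -> {
            try {
                System.out.println(queue.take()); // parks while the queue is empty
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();

        queue.put("event-1"); // enqueue without blocking (the queue is unbounded)
        consumer.join();
    }
}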
I use ArrayBlockingQueue whenever I need to pass data from one thread to another, using the put and take methods (which block if the queue is full or empty, respectively).
Here is a list of classes implementing BlockingQueue.
I would recommend checking out SynchronousQueue.
Like @Rorick mentioned in his comment, I believe all of those implementations are concurrent. I think your concerns about LinkedBlockingQueue may be misplaced.