What's the purpose of the PriorityBlockingQueue?

I've been playing with blocking queues and PriorityQueue, and it got me thinking: I can't see a good use case for PriorityBlockingQueue. The point of a priority queue is to sort the values put into it before they're retrieved. A blocking queue implies that values are inserted into it and retrieved from it concurrently. But if that's the case, you'd never be able to guarantee the sort order.
BlockingQueue<Integer> q = new PriorityBlockingQueue<>();
new Thread(() -> { randomSleep(); q.put(2); randomSleep(); q.put(0); }).start();
new Thread(() -> { randomSleep(); q.put(3); randomSleep(); q.put(1); }).start();
ArrayList<Integer> ordered = new ArrayList<>(4);
for (int i = 0; i < 4; i++) {
    randomSleep();
    ordered.add(q.take());
}
System.out.println(ordered);
In this example, the order in which the main thread gets the offered values is quite random, which seems to defeat the purpose of a priority queue. Even with a single producer and a single consumer, the order cannot be ensured.
So, what is the use of PriorityBlockingQueue then?

In this example, the order in which the main thread gets the offered values is quite random
Well, you have a race condition between the insertion and the retrieval of those elements. Hence the apparently random order.
Nonetheless, you could, for instance, use a PriorityBlockingQueue to sequentially fill up a queue with elements (or tasks) that need to be picked up in parallel by multiple threads, always starting with the highest-priority element/task. In that case you take advantage of a thread-safe structure that guarantees the highest-priority element is always ordered first.
One example would be a queue of tasks in which each task has a priority, and you want those tasks to be processed in parallel.
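For illustration, here is a minimal sketch of that pattern; the PrioritizedTask type, the priorities, and the worker count are all made up for the example, not taken from the question:
import java.util.concurrent.PriorityBlockingQueue;

public class PriorityWorkers {
    // Hypothetical task type: lower priority value means more urgent.
    static class PrioritizedTask implements Comparable<PrioritizedTask> {
        final int priority;
        final Runnable body;
        PrioritizedTask(int priority, Runnable body) {
            this.priority = priority;
            this.body = body;
        }
        public int compareTo(PrioritizedTask o) {
            return Integer.compare(priority, o.priority);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        PriorityBlockingQueue<PrioritizedTask> queue = new PriorityBlockingQueue<>();
        // Fill the queue sequentially, deliberately out of priority order.
        for (int p = 5; p >= 1; p--) {
            int prio = p;
            queue.put(new PrioritizedTask(prio,
                    () -> System.out.println("task with priority " + prio)));
        }
        // Several workers pull in parallel; each take() hands out the
        // highest-priority task remaining at that moment.
        for (int w = 0; w < 3; w++) {
            Thread worker = new Thread(() -> {
                try {
                    while (true) {
                        queue.take().body.run();
                    }
                } catch (InterruptedException e) { /* worker shut down */ }
            });
            worker.setDaemon(true);
            worker.start();
        }
        Thread.sleep(500); // give the daemon workers time to drain the queue
    }
}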

Related

Preventing list objects from being processed twice when using a java threadpool

Let's say I have a list of 10,000 objects
ArrayList<String> al=new ArrayList<String>();
al.add("1");
al.add("2");
al.add("..");
al.add("10000");
I want to process the 10,000 objects using a thread pool with 20 threads. The goal is to ensure that my program reads every object exactly once.
Since the program won't be marking that a list object has been read, am I guaranteed that every object will only be processed once?
I have an idea, and it may indeed be stupid. Since you are only attempting to read objects from a list, how about applying this strategy:
You have 10,000 elements in the list.
You have 20 threads.
Each thread picks 500 elements.
Assign each thread an integer id from 1 to 20.
Each thread accesses elements based on its integer id: thread 1 accesses elements 0-499, thread 2 accesses 500-999, and so on.
This guarantees that no element will be read by multiple threads.
One assumption here: all threads will be doing a similar kind of processing on the elements.
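A minimal sketch of this range-partition strategy, assuming the usual java.util.concurrent imports; the process() call is a hypothetical placeholder for the read-only work:
ExecutorService pool = Executors.newFixedThreadPool(20);
int chunk = al.size() / 20; // 500 elements per thread for a 10,000-element list
for (int t = 0; t < 20; t++) {
    final int from = t * chunk;
    final int to = (t == 19) ? al.size() : from + chunk; // last thread takes any remainder
    pool.execute(() -> {
        for (int i = from; i < to; i++) {
            process(al.get(i)); // hypothetical read-only work; no marking needed
        }
    });
}
pool.shutdown();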
In another approach, you could create a synchronized Set and, every time you pick an element, check whether its index is already present in the set; if it is not, process the element and insert its index into the set. That way you will never pick an element twice.
You can use code like this:
ExecutorService executorService = Executors.newFixedThreadPool(20);
for (String item : al) {
    executorService.execute(() -> {
        // process 'item' here; each element is submitted exactly once,
        // so no synchronization on the list itself is needed
    });
}
executorService.shutdown();
Divide the list into 20 buckets (using Math.floorMod so a negative hash code cannot produce a negative key):
Map<Integer, List<String>> mapList = al.stream().collect(Collectors.groupingBy(i -> Math.floorMod(i.hashCode(), 20)));
Note that hashing gives roughly, not exactly, equal parts.
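To complete the idea, a sketch of handing each bucket to the pool; process() is again a hypothetical placeholder:
ExecutorService pool = Executors.newFixedThreadPool(20);
mapList.values().forEach(bucket ->
        pool.execute(() -> bucket.forEach(item -> process(item)))); // process() is hypothetical
pool.shutdown(); // each element lands in exactly one bucket, so it is read exactly once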

Processing sub-streams of a stream in Java using executors

I have a program that processes a huge stream (not in the sense of java.util.stream, but rather InputStream) of data coming in through the network. The stream consists of objects, each having a sort of sub-stream identifier. Right now the whole processing is done in a single thread, but it takes a lot of CPU time and each sub-stream can easily be processed independently, so I'm thinking of multi-threading it.
However, each sub-stream requires to keep a lot of bulky state, including various buffers, hash maps and such. There is no particular reason to make it concurrent or synchronized since sub-streams are independent of each other. Moreover, each sub-stream requires that its objects are processed in the order they arrive, which means that probably there should be a single thread for each sub-stream (but possibly one thread processing multiple sub-streams).
I'm thinking of several approaches to this, but they are not quite elegant.
1. Create a single ThreadPoolExecutor for all tasks. Each task will contain the next object to process and the reference to a Processor instance which keeps all the state. That would ensure the necessary happens-before relationship, thus ensuring that the processing thread will see the up-to-date state for this sub-stream. This approach has no way to make sure that the next object of the same sub-stream will be processed in the same thread, as far as I can see. Moreover, it needs some guarantee that objects will be processed in the order they come in, which will require additional synchronization of the Processor objects, introducing unnecessary delays.
2. Create multiple single-thread executors manually and a sort of hash map that maps sub-stream identifiers to executors. This approach requires manual management of executors, creating or shutting them down as new sub-streams begin or end, and distributing the tasks between them accordingly.
3. Create a custom executor that processes a special subclass of tasks, each having a sub-stream ID. This executor would use the ID as a hint to execute the task on the same thread as the previous task with the same ID. However, I don't see an easy way to implement such an executor. Unfortunately, it doesn't seem possible to extend any of the existing executor classes, and implementing an executor from scratch is kind of overkill.
4. Create a single ThreadPoolExecutor, but instead of creating a task for each incoming object, create a single long-running task for each sub-stream that would block on a concurrent queue, waiting for the next object. Then put objects in queues according to their sub-stream IDs. This approach needs as many threads as there are sub-streams because the tasks will be blocked. The expected number of sub-streams is about 30-60, so that may be acceptable.
5. Alternatively, proceed as in 4, but limit the number of threads, assigning multiple sub-streams to a single task. This is sort of a hybrid between 2 and 4. As far as I can see, this is the best approach of these, but it still requires some sort of manual sub-stream distribution between tasks and some way to shut the extra tasks down as sub-streams end.
What would be the best way to ensure that each sub-stream is processed in its own thread without a lot of error-prone code? So that the following pseudo-code will work:
// loop {
    Item next = stream.read();
    int id = next.getSubstreamID();
    Processor processor = getProcessor(id);
    SubstreamTask task = new SubstreamTask(processor, next, id);
    executor.submit(task); // This makes sure that the task will
                           // be executed in the same thread as the
                           // previous task with the same ID.
// } // loop
I suggest having an array of single-threaded executors. If you can devise a consistent hashing strategy for sub-streams, you can map sub-streams to individual threads, e.g.
final ExecutorService[] es = ...
public void submit(int id, Runnable run) {
    es[(id & 0x7FFFFFFF) % es.length].submit(run);
}
The key could be a String or a long, anything that identifies the sub-stream. If you know a particular sub-stream is very expensive, you could assign it a dedicated thread.
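A fuller sketch of that idea, assuming eight lanes and single-threaded executors (the class name and lane count are illustrative):
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class SubstreamDispatcher {
    // One single-threaded executor per lane; 8 lanes is an arbitrary choice.
    private final ExecutorService[] es = new ExecutorService[8];

    SubstreamDispatcher() {
        for (int i = 0; i < es.length; i++) {
            es[i] = Executors.newSingleThreadExecutor();
        }
    }

    // Tasks with the same id always land on the same single-threaded
    // executor, so tasks of one sub-stream run in submission order.
    public void submit(int id, Runnable run) {
        es[(id & 0x7FFFFFFF) % es.length].submit(run);
    }

    public void shutdown() {
        for (ExecutorService e : es) {
            e.shutdown();
        }
    }
}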
The solution I finally chose looks like this:
private final Executor[] streamThreads
        = new Executor[Runtime.getRuntime().availableProcessors()];
{
    for (int i = 0; i < streamThreads.length; ++i) {
        streamThreads[i] = Executors.newSingleThreadExecutor();
    }
}

private final ConcurrentHashMap<SubstreamId, Integer>
        threadById = new ConcurrentHashMap<>();
This code determines which executor to use:
Message msg = in.readNext();
SubstreamId msgSubstream = msg.getSubstreamId();
int exe = threadById.computeIfAbsent(msgSubstream,
        id -> findBestExecutor());
streamThreads[exe].execute(() -> {
    // processing goes here
});
And the findBestExecutor() function is this:
private int findBestExecutor() {
    // Thread index -> substream count mapping:
    final int[] loads = new int[streamThreads.length];
    for (int thread : threadById.values()) {
        ++loads[thread];
    }
    // return the index of the minimum load
    return IntStream.range(0, streamThreads.length)
            .reduce((i, j) -> loads[i] <= loads[j] ? i : j)
            .orElse(0);
}
This is, of course, not very efficient, but note that this function is only called when a new sub-stream shows up (which happens several times every few hours, so it's not a big deal in my case). My real code is a bit more complicated because I have a way to determine whether two sub-streams are likely to finish simultaneously, and if they are, I try to assign them to different threads in order to maintain an even load after they finish. But since I never mentioned this detail in the question, I guess it doesn't belong in the answer either.

Two threads transferring data in both directions between two ConcurrentLinkedQueues results in one empty queue while the other "steals" everything

Everyone!
I've written a class (InAndOut) that extends Thread. Its constructor receives two ConcurrentLinkedQueues, entrance and exit, and its run method transfers the objects from entrance to exit.
In my main method, I instantiate two ConcurrentLinkedQueues, myQueue1 and myQueue2, with some values in each. Then I instantiate two InAndOut instances: one receiving myQueue1 (entrance) and myQueue2 (exit), and another receiving myQueue2 (entrance) and myQueue1 (exit). Then I call the start method of both instances.
The result, after some iterations, is the transfer of all objects from one queue to the other; in other words, myQueue1 becomes empty and myQueue2 "steals" all the objects. But if I add a sleep call in each iteration (something like 100 ms), then the behavior is what I expected (an equilibrium between the number of elements in both queues).
Why is this happening, and how can I fix it? Is there a way to avoid the sleep call in my run method? Am I doing something wrong?
Here is my source code:
import java.util.concurrent.ConcurrentLinkedQueue;

class InAndOut extends Thread {
    ConcurrentLinkedQueue<String> entrance;
    ConcurrentLinkedQueue<String> exit;
    String name;

    public InAndOut(String name, ConcurrentLinkedQueue<String> entrance, ConcurrentLinkedQueue<String> exit) {
        this.entrance = entrance;
        this.exit = exit;
        this.name = name;
    }

    public void run() {
        int it = 0;
        while (it < 3000) {
            String value = entrance.poll();
            if (value != null) {
                exit.offer(value);
                System.err.println(this.name + " / entrance: " + entrance.size() + " / exit: " + exit.size());
            }
            // THIS IS THE SLEEP CALL THAT MAKES THE CODE WORK AS EXPECTED
            try {
                Thread.sleep(100);
            } catch (Exception ex) {
            }
            it++;
        }
    }
}
public class Main {
    public static void main(String[] args) {
        ConcurrentLinkedQueue<String> myQueue1 = new ConcurrentLinkedQueue<String>();
        ConcurrentLinkedQueue<String> myQueue2 = new ConcurrentLinkedQueue<String>();
        myQueue1.offer("a");
        myQueue1.offer("b");
        myQueue1.offer("c");
        myQueue1.offer("d");
        myQueue1.offer("e");
        myQueue1.offer("f");
        myQueue1.offer("g");
        myQueue1.offer("h");
        myQueue1.offer("i");
        myQueue1.offer("j");
        myQueue1.offer("k");
        myQueue1.offer("l");
        myQueue2.offer("m");
        myQueue2.offer("n");
        myQueue2.offer("o");
        myQueue2.offer("p");
        myQueue2.offer("q");
        myQueue2.offer("r");
        myQueue2.offer("s");
        myQueue2.offer("t");
        myQueue2.offer("u");
        myQueue2.offer("v");
        myQueue2.offer("w");
        InAndOut es = new InAndOut("First", myQueue1, myQueue2);
        InAndOut es2 = new InAndOut("Second", myQueue2, myQueue1);
        es.start();
        es2.start();
    }
}
Thanks in advance!
Even if thread scheduling were deterministic, the observed behavior would remain plausible. As long as both threads perform the same amount of work they might run balanced, though you cannot rely on that. But as soon as one queue runs empty, the tasks are no longer balanced. Compare:
Thread one polls from a queue which has items. The poll method will modify the source queue's state to reflect the removal, your code inserts the received item into the other queue, creating an internal list node object and modifying the target queue’s state to reflect the insertion. All modifications are performed in a way visible to other threads.
Thread two polls from an empty queue. The poll method checks a reference and finds null and that’s all. No other action is performed.
I think it should be obvious that one thread has far more to do than the other once one queue has gone empty. More precisely, one thread can finish its 3000 loop iterations (it could even do 300,000) in less time than the other needs to perform even a single iteration.
So once one queue is empty, one thread finishes its loop almost immediately and after that the other thread will transfer all items from one queue to the other and finish afterwards too.
So even with an almost deterministic scheduling behavior, the balance would always bear the risk of tipping once one queue happens to become empty.
You can raise the chance of a balanced run by adding far more items to the queues, reducing the likelihood of one queue running empty. You can raise the number of iterations (to far more than a million) to keep a thread from exiting almost immediately when its queue runs empty, or increment the counter only when a non-null item has been seen. You can use a CountDownLatch to make both threads wait before entering the loop, compensating for the thread startup overhead, so they run as synchronously as possible.
However, keep in mind that it still remains non-deterministic, and polling loops waste CPU resources. But it's OK for trying things out and learning.
The order of execution with threads is undefined, so anything could happen. However, since you do not start both threads simultaneously, you can make some assumptions about what might happen:
es is started first, so given a fast enough CPU, it may already have pushed everything from myQueue1 into myQueue2 before es2 starts; it then sleeps between iterations.
es2 starts and puts one element from myQueue2 back into myQueue1.
es wakes up at about the same time and puts that element back.
Since both threads work at roughly the same speed, one likely result is that one queue ends up with one element or none and the other holds all the rest.
jtahlborn is exactly right when he says that multithreading is non-deterministic, and as such I would suggest you think harder about what your expectations are in this application, because it isn't quite clear, and the code is functioning as I would expect (based on how it's written).
With that said, you may be looking for a BlockingQueue and not a ConcurrentLinkedQueue. A blocking queue will suspend the thread when the queue is empty and wait until it has an item before continuing. Swap out ConcurrentLinkedQueue for LinkedBlockingQueue.
The difference between the two is that when a ConcurrentLinkedQueue has no items, poll() returns immediately with a null value, so the loop can burn through its 3000 iterations very, very quickly.
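A hedged sketch of the same transfer loop rewritten for blocking queues; only the field types and the loop body change, the rest of the original program stays as it is:
// Sketch: assumes the fields are declared as BlockingQueue<String> and
// constructed as LinkedBlockingQueue<String>; take() suspends the thread
// while the entrance queue is empty instead of spinning on null.
public void run() {
    for (int it = 0; it < 3000; it++) {
        try {
            String value = entrance.take(); // blocks until an element arrives
            exit.offer(value);
        } catch (InterruptedException ex) {
            Thread.currentThread().interrupt();
            return;
        }
    }
}
Since every iteration now transfers exactly one element, both threads do the same amount of work, and the sleep call is no longer needed to keep the queues balanced.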

Java Iterator Concurrency

I'm trying to loop over a Java iterator concurrently, but am having trouble finding the best way to do this.
Here is what I have, where I don't try to do anything concurrently.
Long l;
Iterator<Long> i = getUserIDs();
while (i.hasNext()) {
    l = i.next();
    someObject.doSomething(l);
    anotheObject.doSomething(l);
}
There should be no race conditions between the things I'm doing on the non iterator objects, so I'm not too worried about that. I'd just like to speed up how long it takes to loop through the iterator by not doing it sequentially.
Thanks in advance.
One solution is to use an executor to parallelise your work.
Simple example:
ExecutorService executor = Executors.newCachedThreadPool();
Iterator<Long> i = getUserIDs();
while (i.hasNext()) {
    final Long l = i.next();
    Runnable task = new Runnable() {
        public void run() {
            someObject.doSomething(l);
            anotheObject.doSomething(l);
        }
    };
    executor.submit(task);
}
executor.shutdown();
This submits a task for each item in the iterator; the cached thread pool creates new threads as needed and reuses idle ones to do the work. You can tune how many threads are used by choosing a different factory method on the Executors class, or subdivide the work as you see fit (e.g. a different Runnable for each of the two method calls).
I can offer two possible approaches:
Use a thread pool and dispatch the items received from the iterator to a set of processing threads. This will not accelerate the iterator operations themselves, since those would still happen in a single thread, but it will parallelize the actual processing.
Depending on how the iteration is created, you might be able to split the iteration process into multiple segments, each to be processed by a separate thread via a different Iterator object. For an example, have a look at the List.subList(int fromIndex, int toIndex) and List.listIterator(int index) methods.
This would allow the iterator operations to happen in parallel, but it is not always possible to segment the iteration like this, usually due to the simple fact that the items to be iterated over are not immediately available.
As a bonus trick, if the iteration operations are expensive or slow, such as those required to access a database, you might see a throughput improvement if you separate them out into a separate thread that uses the iterator to fill a BlockingQueue. The dispatcher thread then only has to access the queue, without waiting on the iterator object to retrieve the next item.
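A rough sketch of that bonus trick, reusing getUserIDs() from the question; the buffer size and the shutdown handling are illustrative choices, and a poison-pill element would be cleaner in real code:
import java.util.Iterator;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Bounded buffer so the prefetching thread cannot run arbitrarily far ahead.
BlockingQueue<Long> buffer = new LinkedBlockingQueue<>(1024);

Thread prefetcher = new Thread(() -> {
    Iterator<Long> it = getUserIDs(); // the slow iterator from the question
    try {
        while (it.hasNext()) {
            buffer.put(it.next()); // blocks while the buffer is full
        }
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
});
prefetcher.start();

// Consumer: drain the buffer; stop once the prefetcher has finished
// and the buffer is empty.
while (prefetcher.isAlive() || !buffer.isEmpty()) {
    Long l = buffer.poll(100, TimeUnit.MILLISECONDS); // throws InterruptedException
    if (l != null) {
        someObject.doSomething(l);
        anotheObject.doSomething(l);
    }
}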
The most important advice in this case is this: "Use your profiler", usually to be followed by "Do not optimise prematurely". By using a profiler, such as VisualVM, you should be able to ascertain the exact cause of any performance issues, without taking shots in the dark.
If you are using Java 7, you can use the new fork/join framework; see the tutorial.
Not only does it split the tasks among the threads automatically, but if some thread finishes its tasks earlier than the others, it "steals" tasks from the other threads.
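A minimal fork/join sketch, assuming the IDs can first be materialized into a List; the threshold and the doSomething() body are placeholders:
import java.util.List;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveAction;

class ProcessIds extends RecursiveAction {
    static final int THRESHOLD = 100; // tune to taste
    final List<Long> ids;
    final int lo, hi;

    ProcessIds(List<Long> ids, int lo, int hi) {
        this.ids = ids; this.lo = lo; this.hi = hi;
    }

    protected void compute() {
        if (hi - lo <= THRESHOLD) {
            for (int k = lo; k < hi; k++) {
                doSomething(ids.get(k)); // placeholder for the real per-id work
            }
        } else {
            int mid = (lo + hi) >>> 1; // split in half; idle workers steal halves
            invokeAll(new ProcessIds(ids, lo, mid), new ProcessIds(ids, mid, hi));
        }
    }

    static void doSomething(Long id) { /* real work goes here */ }
}

// Usage: new ForkJoinPool().invoke(new ProcessIds(allIds, 0, allIds.size()));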

Which concurrent Queue implementation should I use in Java?

From the JavaDocs:
A ConcurrentLinkedQueue is an appropriate choice when many threads will share access to a common collection. This queue does not permit null elements.
ArrayBlockingQueue is a classic "bounded buffer", in which a fixed-sized array holds elements inserted by producers and extracted by consumers. This class supports an optional fairness policy for ordering waiting producer and consumer threads
LinkedBlockingQueue typically has higher throughput than array-based queues but less predictable performance in most concurrent applications.
I have two scenarios: one requires the queue to support many producers (threads using it) with one consumer, and the other is the other way around.
I do not understand which implementation to use. Can somebody explain what the differences are?
Also, what is the 'optional fairness policy' in the ArrayBlockingQueue?
ConcurrentLinkedQueue means no locks are taken (i.e. no synchronized(this) or Lock.lock calls). It uses a CAS (compare-and-swap) operation during modifications to see if the head/tail node is still the same as when it started. If so, the operation succeeds. If the head/tail node is different, it will spin around and try again.
LinkedBlockingQueue will take a lock before any modification. So your offer calls would block until they get the lock. You can use the offer overload that takes a TimeUnit to say you are only willing to wait X amount of time before abandoning the add (usually good for message type queues where the message is stale after X number of milliseconds).
Fairness means that the Lock implementation will keep the threads ordered. Meaning if Thread A enters and then Thread B enters, Thread A will get the lock first. With no fairness, it is undefined really what happens. It will most likely be the next thread that gets scheduled.
As for which one to use, it depends. I tend to use ConcurrentLinkedQueue because the times at which my producers put work onto the queue are diverse; I don't have a lot of producers producing at the exact same moment. But the consumer side is more complicated because poll won't go into a nice sleep state; you have to handle that yourself.
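For instance, a sketch of one way to handle that yourself; the running flag and the 1 ms park interval are arbitrary choices, not from any particular library:
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.locks.LockSupport;

ConcurrentLinkedQueue<Runnable> queue = new ConcurrentLinkedQueue<>();

// Consumer loop: poll() returns null when the queue is empty, so back
// off briefly instead of spinning at full speed. 'running' stands in
// for a volatile field or AtomicBoolean used to shut the loop down.
while (running) {
    Runnable job = queue.poll();
    if (job != null) {
        job.run();
    } else {
        LockSupport.parkNanos(1_000_000L); // park roughly 1 ms before re-checking
    }
}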
Basically, the differences between them are performance characteristics and blocking behavior.
Taking the easiest first: ArrayBlockingQueue is a queue of fixed size. So if you set the size to 10 and attempt to insert an 11th element, the insert will block until another thread removes an element. The fairness issue is what happens if multiple threads try to insert and remove at the same time (in other words, during the period when the queue was blocked). A fairness algorithm ensures that the first thread that asks is the first thread that gets. Otherwise, a given thread may wait longer than other threads, causing unpredictable behavior (sometimes one thread takes several seconds because other threads that started later got processed first). The trade-off is that managing fairness adds overhead, slowing down throughput.
The most important difference between LinkedBlockingQueue and ConcurrentLinkedQueue is that if you request an element from a LinkedBlockingQueue and the queue is empty, your thread will wait until there is something there. A ConcurrentLinkedQueue will return right away with the behavior of an empty queue.
Which one you need depends on whether you need the blocking. Where you have many producers and one consumer, it sounds like you do. On the other hand, where you have many consumers and only one producer, you may not need the blocking behavior and may be happy to just have the consumers check if the queue is empty and move on if it is.
Your question title mentions Blocking Queues. However, ConcurrentLinkedQueue is not a blocking queue.
The BlockingQueues are ArrayBlockingQueue, DelayQueue, LinkedBlockingDeque, LinkedBlockingQueue, PriorityBlockingQueue, and SynchronousQueue.
Some of these are clearly not fit for your purpose (DelayQueue, PriorityBlockingQueue, and SynchronousQueue). LinkedBlockingQueue and LinkedBlockingDeque are identical, except that the latter is a double-ended Queue (it implements the Deque interface).
Since ArrayBlockingQueue is only useful if you want to limit the number of elements, I'd stick to LinkedBlockingQueue.
ArrayBlockingQueue has a lower memory footprint: it reuses the slots of its backing array, unlike LinkedBlockingQueue, which has to create a LinkedBlockingQueue$Node object for each new insertion.
1. SynchronousQueue (taken from another question)
SynchronousQueue is more of a handoff, whereas a LinkedBlockingQueue of size 1 just holds a single element. The difference is that a put() call to a SynchronousQueue will not return until there is a corresponding take() call, but with a LinkedBlockingQueue of size 1, a put() call (to an empty queue) returns immediately. It's essentially the BlockingQueue implementation for when you don't really want a queue (you don't want to maintain any pending data).
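A tiny sketch of that handoff behavior (illustrative only):
import java.util.concurrent.SynchronousQueue;

SynchronousQueue<String> handoff = new SynchronousQueue<>();

new Thread(() -> {
    try {
        // take() rendezvouses with the producer's put(); neither call
        // returns until both sides have arrived.
        System.out.println("got " + handoff.take());
    } catch (InterruptedException ignored) { }
}).start();

try {
    handoff.put("hello"); // blocks until the take() above happens
} catch (InterruptedException ignored) { }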
2. LinkedBlockingQueue (a linked-list implementation, though not exactly the JDK's LinkedList: it uses a static inner Node class to maintain the links between elements)
Constructor for LinkedBlockingQueue:
public LinkedBlockingQueue(int capacity) {
    if (capacity <= 0) throw new IllegalArgumentException();
    this.capacity = capacity;
    last = head = new Node<E>(null); // maintains an underlying linked list (use when size is not known)
}
The Node class used to maintain the links:
static class Node<E> {
    E item;
    Node<E> next;
    Node(E x) { item = x; }
}
3. ArrayBlockingQueue (array implementation)
Constructor for ArrayBlockingQueue:
public ArrayBlockingQueue(int capacity, boolean fair) {
    if (capacity <= 0)
        throw new IllegalArgumentException();
    this.items = new Object[capacity]; // maintains an underlying array
    lock = new ReentrantLock(fair);
    notEmpty = lock.newCondition();
    notFull = lock.newCondition();
}
IMHO the biggest difference between ArrayBlockingQueue and LinkedBlockingQueue is clear from the constructors: one has an underlying array as its data structure, the other a linked list.
ArrayBlockingQueue uses a single-lock, double-condition algorithm, while LinkedBlockingQueue is a variant of the "two lock queue" algorithm: it has two locks and two conditions (takeLock and putLock).
ConcurrentLinkedQueue is lock-free; LinkedBlockingQueue is not. Every time you invoke LinkedBlockingQueue.put() or LinkedBlockingQueue.take(), you need to acquire a lock first. In other words, LinkedBlockingQueue has poorer concurrency. If you care about performance, try ConcurrentLinkedQueue + LockSupport.
