How can I trigger heapification of a PriorityQueue? - java

I'm using the following code for a PriorityQueue<Node<T>>, where Node<T> is not Comparable:
final Map<Node<T>, Double> distances = new HashMap<>();
PriorityQueue<Node<T>> queue = new PriorityQueue<Node<T>>(
        graph.getNodes().size(), new Comparator<Node<T>>() {
    @Override
    public int compare(Node<T> o1, Node<T> o2) {
        return distances.get(o1).compareTo(distances.get(o2));
    }
});
Later in my code, I modify the distances of the nodes in the map with distances.put(...). How can I ensure that the priority queue is updated correctly to reflect the new sort order?
I've looked at the source for PriorityQueue and see that its peek, poll, and element methods all just get queue[0], but I don't know how to update the order of the queue, as the internal method heapify is private.

As far as I know you can't with the default implementation. However, you can work around it. This depends on what you are using the priority queue for:
For updating only when the priority of a node has increased
For updating both when the priority of a node has increased, and decreased
Case 1 is the much simpler case, because the PriorityQueue already maintains the priority order for you. Simply add another copy of the node with its new, higher priority to the queue, and keep track of whether each node has already been popped from the queue (using a boolean array or a hash table).
When you pop a node that hasn't been popped before, you can guarantee that it is the copy with the best priority currently in the queue, even if you previously added multiple copies of it.
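A minimal sketch of this lazy-deletion pattern (class and method names are illustrative, not from the question's code): offer a fresh copy whenever a node's priority improves, and skip copies of nodes that have already been settled.

```java
import java.util.*;

public class LazyDeletionExample {
    // Each queue entry snapshots a node's priority at offer time.
    static final class Entry {
        final String node;
        final double priority;
        Entry(String node, double priority) { this.node = node; this.priority = priority; }
    }

    // Pop entries in priority order, ignoring duplicate copies of nodes
    // that were already settled (the boolean/hash-table bookkeeping).
    static List<String> settleAll(PriorityQueue<Entry> pq) {
        Set<String> settled = new HashSet<>();
        List<String> order = new ArrayList<>();
        while (!pq.isEmpty()) {
            Entry e = pq.poll();
            if (settled.add(e.node)) {
                order.add(e.node); // the first copy popped is the best one
            }
        }
        return order;
    }

    public static void main(String[] args) {
        PriorityQueue<Entry> pq =
                new PriorityQueue<>(Comparator.comparingDouble((Entry e) -> e.priority));
        pq.add(new Entry("a", 5.0));
        pq.add(new Entry("b", 3.0));
        pq.add(new Entry("a", 2.0)); // "a" improved: just offer a duplicate
        System.out.println(settleAll(pq)); // prints [a, b]
    }
}
```

The stale copy of "a" (priority 5.0) is still in the heap when it is polled, but the settled set makes it a no-op.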
Case 2 is trickier and will require use of a boolean variable within each node to track if it's "enabled". Also, you need a hashtable or array so you can access the node in the priority queue. When you want to update a node's priority, you have to "disable" the existing node in the queue while adding the new one in. Also keep track of the one you just added as the "new" existing node. When you pop, just ignore nodes that are "disabled".
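A hypothetical sketch of Case 2 (names are illustrative): each entry carries an "enabled" flag, a hash map points at the current live entry for each key, and disabled entries are simply skipped when popping.

```java
import java.util.*;

public class DisableFlagQueue {
    // An entry is "disabled" in place instead of being removed from the heap.
    static final class Entry {
        final String key;
        final int priority;
        boolean enabled = true;
        Entry(String key, int priority) { this.key = key; this.priority = priority; }
    }

    private final PriorityQueue<Entry> pq =
            new PriorityQueue<>(Comparator.comparingInt((Entry e) -> e.priority));
    private final Map<String, Entry> live = new HashMap<>();

    public void addOrUpdate(String key, int priority) {
        Entry old = live.get(key);
        if (old != null) {
            old.enabled = false;      // tombstone the existing entry in the queue
        }
        Entry e = new Entry(key, priority);
        live.put(key, e);             // track the "new" existing node
        pq.add(e);
    }

    public String poll() {
        while (!pq.isEmpty()) {
            Entry e = pq.poll();
            if (e.enabled) {
                live.remove(e.key);
                return e.key;
            }
            // disabled entries are ignored
        }
        return null;
    }
}
```

Unlike Case 1, this handles priorities moving in either direction, at the cost of dead entries lingering in the heap until they surface.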
As for time complexity, amortized cost will still be O(log n) for adding. This is because the cost of updating a node's position in the heap in the "heapify" is O(log n), which is asymptotically equal to calling offer. Time cost for pop will increase depending on the ratio of adding duplicate nodes vs valid nodes.
Alternatively, write your own updating priority queue.

if (!priorityQueue.isEmpty()) {
    priorityQueue.add(priorityQueue.remove());
}
should be a simple way to trigger re-heapification. Note that this only re-positions the element currently at the head; it does not rebuild the entire heap, so it only helps when the head's priority is the one that changed.

Related

Dynamic Sorting Queue Java [duplicate]

I need to implement a priority queue where the priority of an item in the queue can change and the queue adjusts itself so that items are always removed in the correct order. I have some ideas of how I could implement this but I'm sure this is quite a common data structure so I'm hoping I can use an implementation by someone smarter than me as a base.
Can anyone tell me the name of this type of priority queue so I know what to search for or, even better, point me to an implementation?
Priority queues such as this are typically implemented using a binary heap data structure as someone else suggested, which usually is represented using an array but could also use a binary tree. It actually is not hard to increase or decrease the priority of an element in the heap. If you know you are changing the priority of many elements before the next element is popped from the queue you can temporarily turn off dynamic reordering, insert all of the elements at the end of the heap, and then reorder the entire heap (at a cost of O(n)) just before the element needs to be popped. The important thing about heaps is that it only costs O(n) to put an array into heap order but O(n log n) to sort it.
I have used this approach successfully in a large project with dynamic priorities.
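java.util.PriorityQueue keeps its heapify private, so a bulk rebuild there costs O(n log n) via addAll rather than the O(n) a true build-heap would give; still, the batch-rebuild idea itself can be sketched in Java (class and names are illustrative):

```java
import java.util.*;

public class BatchRebuild {
    // Rebuild a queue from scratch after a batch of priority changes.
    // With java.util.PriorityQueue this costs O(n log n); a heap that
    // exposes a build-heap operation could do it in O(n).
    static <E> PriorityQueue<E> rebuild(PriorityQueue<E> queue, Comparator<? super E> cmp) {
        PriorityQueue<E> fresh = new PriorityQueue<>(cmp);
        fresh.addAll(queue);
        return fresh;
    }

    public static void main(String[] args) {
        Map<String, Integer> priority = new HashMap<>(Map.of("a", 3, "b", 1, "c", 2));
        Comparator<String> byPriority = Comparator.comparing(priority::get);
        PriorityQueue<String> queue = new PriorityQueue<>(byPriority);
        queue.addAll(priority.keySet());

        // "Dynamic reordering is off": mutate priorities freely...
        priority.put("a", 0);
        priority.put("c", 5);

        // ...then reorder the entire queue once, just before polling.
        queue = rebuild(queue, byPriority);
        System.out.println(queue.poll()); // "a" now has the smallest priority
    }
}
```

The point is that the queue's internal order is only trusted again after the one-shot rebuild, which is why this works even though the comparator's inputs changed behind the queue's back.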
Here is my implementation of a parameterized priority queue in the Curl programming language.
A standard binary heap supports 5 operations (the example below assume a max heap):
* find-max: return the maximum node of the heap
* delete-max: remove the root node of the heap
* increase-key: update a key within the heap
* insert: add a new key to the heap
* merge: join two heaps to form a valid new heap containing all the elements of both
As you can see, in a max heap you can increase an arbitrary key. In a min heap you can decrease an arbitrary key. Unfortunately you can't change keys both ways, but will this do? If you need to change keys both ways then you might want to think about using a min-max heap.
I would suggest first trying the head-on approach to update a priority:
delete the item from the queue
re-insert it with the new priority
In C++, this could be done using a std::multimap; the important thing is that the object must remember where it is stored in the structure so it can delete itself efficiently. Re-insertion is the difficult part, since you cannot presume to know anything about the other priorities.
class Item;
typedef std::multimap<int, Item*> priority_queue;

class Item
{
public:
    void add(priority_queue& queue);
    void remove();
    int getPriority() const;
    void setPriority(int priority);
    std::string& accessData();
    const std::string& getData() const;
private:
    int mPriority;
    std::string mData;
    priority_queue* mQueue;
    priority_queue::iterator mIterator;
};

void Item::add(priority_queue& queue)
{
    mQueue = &queue;
    mIterator = queue.insert(std::make_pair(mPriority, this));
}

void Item::remove()
{
    mQueue->erase(mIterator);
    mQueue = 0;
    mIterator = priority_queue::iterator();
}

void Item::setPriority(int priority)
{
    mPriority = priority;
    if (mQueue)
    {
        priority_queue& queue = *mQueue;
        this->remove();
        this->add(queue);
    }
}
I am looking for just exactly the same thing!
And here is some of my idea:
Since the priority of an item keeps changing, it's meaningless to sort the queue before retrieving an item. So we should forget about using a priority queue, and instead "partially" sort the container while retrieving an item.
And choose from the following STL sort algorithms:
a. partition
b. stable_partition
c. nth_element
d. partial_sort
e. partial_sort_copy
f. sort
g. stable_sort
partition, stable_partition and nth_element are linear-time algorithms, which should be our first choices.
However, it seems that the official Java library does not provide those algorithms. As a result, I suggest using java.util.Collections.max/min to do what you want.
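For example (a minimal sketch with illustrative names, echoing the distances map from the original question): each retrieval becomes an O(n) scan with Collections.min, and since nothing is pre-sorted, priorities may change freely between calls.

```java
import java.util.*;

public class LinearSelect {
    public static void main(String[] args) {
        Map<String, Double> distances = new HashMap<>(
                Map.of("a", 4.0, "b", 1.5, "c", 3.0));
        List<String> frontier = new ArrayList<>(distances.keySet());

        // O(n) selection at retrieval time; no heap to keep consistent.
        String next = Collections.min(frontier, Comparator.comparing(distances::get));
        System.out.println(next); // "b" has the smallest distance

        // Updating a priority is just a map write; the next min() sees it.
        distances.put("a", 0.5);
        System.out.println(Collections.min(frontier, Comparator.comparing(distances::get)));
    }
}
```

This trades the heap's O(log n) pops for O(n) pops, which can still win when updates vastly outnumber retrievals.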
Google has a number of answers for you, including an implementation of one in Java.
However, this sounds like something that would be a homework problem, so if it is, I'd suggest trying to work through the ideas yourself first, then potentially referencing someone else's implementation if you get stuck somewhere and need a pointer in the right direction. That way, you're less likely to be "biased" towards the precise coding method used by the other programmer and more likely to understand why each piece of code is included and how it works. Sometimes it can be a little too tempting to do the paraphrasing equivalent of "copy and paste".

Persistent Queue datastructure

I need to make a persistent queue class in which the enqueue function takes an element, enqueues it to the current queue, and returns the new queue, while the original queue remains unchanged. Similarly, the dequeue function removes the front element and returns the new queue, leaving the original queue unchanged. This can of course be done in O(length of queue), but can I do it faster?
You can use a linked list as queue (not LinkedList, but your own implementation).
To add a new element you only have to create a new instance of your queue class, set its start element to the copied queue's start element, and create a new end element.
Removing an element is similar, but set the end element of the new queue to the second to last element of the copied queue.
The queue class could look like this:
public class Queue {
    static class Node {
        final Node next;
        final Object o;
        Node(Node next, Object o) {
            this.next = next;
            this.o = o;
        }
    }

    final Node start, end;

    private Queue(Node start, Node end) {
        this.start = start;
        this.end = end;
    }

    public Queue(Object o) {
        start = end = new Node(null, o);
    }

    public Queue add(Object o) {
        return new Queue(start, new Node(end, o));
    }

    public Queue remove() {
        return new Queue(start, end.next);
    }
}
The complexity of this queue's add and remove methods is O(1).
Please note that you can only iterate this queue in reverse order (i.e. newest elements first). Maybe you can come up with something that can be iterated the other way around or even in both directions.
I suggest having a look at the Scala implementation. The comment at the top of the class describes the chosen approach (amortized complexity: O(1)).
Queue is implemented as a pair of Lists, one containing the ''in'' elements and the other the ''out'' elements.
Elements are added to the ''in'' list and removed from the ''out'' list. When the ''out'' list runs dry, the
queue is pivoted by replacing the ''out'' list by ''in.reverse'', and ''in'' by ''Nil''.
Adding items to the queue always has cost O(1). Removing items has cost O(1), except in the case
where a pivot is required, in which case, a cost of O(n) is incurred, where n is the number of elements in the queue. When this happens,
n remove operations with O(1) cost are guaranteed. Removing an item is on average O(1).
http://xuwei-k.github.io/scala-library-sxr/scala-library-2.10.0/scala/collection/immutable/Queue.scala.html
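The same two-list idea can be sketched in Java (a hypothetical minimal class, not a translation of the Scala source): enqueue conses onto "in", dequeue pops from "out", and when "out" runs dry, "in" is reversed onto it.

```java
import java.util.NoSuchElementException;

// A persistent FIFO queue as a pair of immutable singly linked lists.
public final class PersistentQueue<T> {
    private static final class Cons<E> {
        final E head;
        final Cons<E> tail;
        Cons(E head, Cons<E> tail) { this.head = head; this.tail = tail; }
    }

    private final Cons<T> in;   // newest elements first
    private final Cons<T> out;  // oldest elements first

    public PersistentQueue() { this(null, null); }
    private PersistentQueue(Cons<T> in, Cons<T> out) { this.in = in; this.out = out; }

    public PersistentQueue<T> enqueue(T value) {          // O(1)
        return new PersistentQueue<>(new Cons<>(value, in), out);
    }

    public T front() {                                    // amortized O(1)
        return pivotIfNeeded().out.head;
    }

    public PersistentQueue<T> dequeue() {                 // amortized O(1)
        PersistentQueue<T> q = pivotIfNeeded();
        return new PersistentQueue<>(q.in, q.out.tail);
    }

    public boolean isEmpty() { return in == null && out == null; }

    // The pivot: when "out" is empty, reverse "in" onto it. O(n) when it
    // happens, but each element is reversed at most once, hence amortized O(1).
    private PersistentQueue<T> pivotIfNeeded() {
        if (out != null) return this;
        if (in == null) throw new NoSuchElementException("empty queue");
        Cons<T> reversed = null;
        for (Cons<T> c = in; c != null; c = c.tail) {
            reversed = new Cons<>(c.head, reversed);
        }
        return new PersistentQueue<>(null, reversed);
    }
}
```

Every operation returns a new queue and leaves the original untouched, which is exactly the persistence requirement from the question.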
What I do is use Java Chronicle (disclaimer: I wrote it). This is an unbounded, off-heap persisted queue which is stored on disk or in tmpfs (shared memory).
In this approach your consumer keeps track of where it is up to in the queue, but no entry is actually removed (except in a daily or weekly maintenance cycle)
This avoids the need to alter the queue except when adding to it, and you would not need to copy it.
As such maintaining multiple references to where each consumer believes is the tail of the queue is O(1) for each consumer.
As Chronicle uses a compact binary form, how much you can store is limited only by your disk space; e.g. a 2 TB drive can store that much data even on a machine with 8 GB of memory before you have to rotate the queue logs.

Java - inserting an element in the middle or anywhere other than front/end of a Deque

It looks to me that there is no way to insert an element somewhere in the middle of a Deque class in O(1) time. I want to maintain a reference to a particular node in the deque in say a hash table and if I need to remove this node, I just go to its prev and set the prev.next=this.next and similarly this.next.prev=prev and remove this current elem.
But if I have a deque as
Deque<String> myDeque = new ArrayDeque<String>();
or
Deque<String> myDeque = new LinkedList<String>();
none of these would provide this.
Is there an alternative to this? If I have to implement my own doubly linked list, is there a way I can do it by just extending what ArrayDeque already does, so I don't have to rewrite the code for insert etc.? ...well, as far as I know... I don't think so :( :(
This is not possible without writing your own Deque. However:
If I understand correctly you want O(1) removal and insertion at one specific point in an object that otherwise has the interface of a Deque? May I suggest that you use two Deques?
Insertion and removal happen at first only in one of the Deques, until you come across the node you want to save. At that point, depending on whether you would insert this node at the front or at the back, you don't do that, but instead insert the node into the empty Deque, and that empty Deque becomes the target for all your insertions and removals at either the front or the back from then on, while the other Deque only handles insertions and removals at the opposite end.
This would scale maybe to a few key nodes (leading to [number of nodes]+1 Deques used), with insertions/removals only happening at the front of one Deque and at the back of one other Deque (all others are static). You would also have to introduce a fixed convention of whether the first or the last item in each Deque (except the first or the last Deque) is a "key" node.
Of course if you have random insertion and removal at many points the question becomes: Why do you insist on a Deque?
I wouldn't remove the node. Instead, when you take it from the Deque, check it against your collection of elements to be removed/ignored, and discard it if required.

PriorityQueue.poll() calls compareTo()?

I am implementing a PriorityQueue in my program. For that I have also implemented compareTo(). The compareTo() is being called when I perform add(), which is expected. But it is also called when I perform poll(). I thought that the function of poll() is just to remove the head. Why does it need to call compareTo()?
The way a priority queue is implemented is often done with a heap. Part of poll()ing requires restructuring the heap which requires the heap to compare elements... hence compareTo(). This is just a guess though (i.e. I have not dug into the source code to verify my claim).
Here's a quick search on how priority queues are implemented using heaps if you are interested: http://pages.cs.wisc.edu/~vernon/cs367/notes/11.PRIORITY-Q.html#imp
Actually, just for fun, I'll describe how this works in a non-rigorous fashion. A heap is a tree satisfying the heap property: parents are always less than or equal to their children (min heap), or parents are always at least as large as their children (max heap). PriorityQueue is a min heap, so poll() removes the root (make sure you understand this).
But what happens to the tree if you remove the root? It's no longer a tree... So the way they fix this is by moving the root of the tree to a leaf node (where it can be plucked without destroying the tree/invalidating the heap property) and putting some other node in the root. But which node do you put into the root? Intuitively you might think they'd put the left or right child of the root (those are "almost as small as the original root"). You can do that, but you'd then need to fix the subtree rooted at that child (and the code is ugly).
Instead they do the same thing (conceptually) but slightly differently, to make the code nicer. In particular, they pluck a leaf node and stick it in the root (generally you swap the root and the leaf node to do both steps simultaneously). However, the heap property is no longer necessarily satisfied (the leaf node we stuck in the root could be quite large!). To fix this, you "bubble down" the new root until you get it to its correct location. Specifically, you compare the new root with the left and right children and keep swapping (if the parent is larger than at least one of the children) until the heap property is satisfied. Notice that this swapping will indeed lead to a valid heap (you can prove this, but it's intuitive).
Everything is in the JavaDoc (emphasis mine):
An unbounded priority queue based on a priority heap.
And in the source code of poll() you'll find:
public E poll() {
    //...
    if (s != 0)
        siftDown(0, x);
    return result;
}
Where siftDown() is:
/**
 * Inserts item x at position k, maintaining heap invariant by
 * demoting x down the tree repeatedly until it is less than or
 * equal to its children or is a leaf.
 * [...]
 */
private void siftDown(int k, E x) {
    if (comparator != null)
        siftDownUsingComparator(k, x);
    else
        siftDownComparable(k, x);
}
The JavaDoc comment on siftDown() is crucial; read it carefully. Basically, the underlying implementation of PriorityQueue uses a heap, which has to be restructured every time you modify it by polling.
Why does this bother you? compareTo() should be a lightweight, idempotent, side-effect-free method, like equals(), so extra calls to it shouldn't be a concern.

Updating Java PriorityQueue when its elements change priority

I'm trying to use a PriorityQueue to order objects using a Comparator.
This can be achieved easily, but the objects' class variables (with which the comparator calculates priority) may change after the initial insertion. Most people have suggested the simple solution of removing the object, updating the values, and reinserting it, as this is when the priority queue's comparator is put into action.
Is there a better way other than just creating a wrapper class around the PriorityQueue to do this?
You have to remove and re-insert, as the queue works by putting new elements in the appropriate position when they are inserted. This is much faster than the alternative of finding the highest-priority element every time you pull out of the queue. The drawback is that you cannot change the priority after the element has been inserted. A TreeMap has the same limitation (as does a HashMap, which also breaks when the hashcode of its elements changes after insertion).
If you want to write a wrapper, you can move the comparison code from enqueue to dequeue. You would not need to sort at enqueue time anymore (because the order it creates would not be reliable anyway if you allow changes).
But this will perform worse, and you want to synchronize on the queue if you change any of the priorities. Since you need to add synchronization code when updating priorities, you might as well just dequeue and enqueue (you need the reference to the queue in both cases).
I don't know if there is a Java implementation, but if you're changing key values a lot, you can use a Fibonacci heap, which has O(1) amortized cost to decrease a key value of an entry in the heap, rather than O(log n) as in an ordinary heap.
One easy solution is to just add that element again into the priority queue. It will not change the way you extract the elements; it will consume more space, but not enough to affect your running time.
To demonstrate this, consider the Dijkstra implementation below:
public int[] dijkstra() {
    int distance[] = new int[this.vertices];
    int previous[] = new int[this.vertices];
    for (int i = 0; i < this.vertices; i++) {
        distance[i] = Integer.MAX_VALUE;
        previous[i] = -1;
    }
    distance[0] = 0;
    previous[0] = 0;
    PriorityQueue<Node> pQueue = new PriorityQueue<>(this.vertices, new NodeComparison());
    addValues(pQueue, distance);
    while (!pQueue.isEmpty()) {
        Node n = pQueue.remove();
        List<Edge> neighbours = adjacencyList.get(n.position);
        for (Edge neighbour : neighbours) {
            if (distance[neighbour.destination] > distance[n.position] + neighbour.weight) {
                distance[neighbour.destination] = distance[n.position] + neighbour.weight;
                previous[neighbour.destination] = n.position;
                pQueue.add(new Node(neighbour.destination, distance[neighbour.destination]));
            }
        }
    }
    return previous;
}
Here our interest is in line
pQueue.add(new Node(neighbour.destination, distance[neighbour.destination]));
I am not changing the priority of a particular node by removing it and adding it again; rather, I am just adding a new node with the same value but a different priority.
Now at extraction time I will always get this node first, because this is a min heap: the copy with the greater value (lower priority) will always be extracted afterwards, and by the time that lower-priority copy is extracted, all its neighbouring nodes will already have been relaxed.
Without reimplementing the priority queue yourself (that is, by only using java.util.PriorityQueue), you have essentially two main approaches:
1) Remove and put back
Remove element then put it back with new priority. This is explained in the answers above. Removing an element is O(n) so this approach is quite slow.
2) Use a Map and keep stale items in the queue
Keep a HashMap of item -> priority. The keys of the map are the items (without their priority) and the values of the map are the priorities.
Keep it in sync with the PriorityQueue (i.e. every time you add or remove an item from the Queue, update the Map accordingly).
Now when you need to change the priority of an item, simply add the same item to the queue with a different priority (and update the map, of course). When you poll an item from the queue, check whether its priority is the same as in your map. If not, ditch it and poll again.
If you don't need to change the priorities too often, this second approach is faster. Your heap will be larger and you might need to poll more times, but you don't need to find your item.
The 'change priority' operation would be O(f(n) log n*), with f(n) the number of 'change priority' operations per item and n* the actual size of your heap (which is n·f(n)).
I believe that if f(n) is O(n/log n) (for example f(n) = O(sqrt(n))), this is faster than the first approach.
Note: in the explanation above, by "priority" I mean all the variables that are used in your Comparator. Your items also need to implement equals and hashCode, and neither method should use the priority variables.
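A minimal sketch of this second approach (illustrative names; real item types must implement equals/hashCode that ignore the priority fields, as noted above):

```java
import java.util.*;

public class StaleEntryQueue {
    // An entry pairs an item with the priority it was enqueued under.
    static final class Entry {
        final String item;
        final int priority;
        Entry(String item, int priority) { this.item = item; this.priority = priority; }
    }

    private final PriorityQueue<Entry> pq =
            new PriorityQueue<>(Comparator.comparingInt((Entry e) -> e.priority));
    private final Map<String, Integer> current = new HashMap<>();

    public void addOrUpdate(String item, int priority) {
        current.put(item, priority);       // the map always holds the live priority
        pq.add(new Entry(item, priority)); // stale copies simply stay in the heap
    }

    public String poll() {
        while (!pq.isEmpty()) {
            Entry e = pq.poll();
            Integer live = current.get(e.item);
            if (live != null && live == e.priority) { // matches the map: not stale
                current.remove(e.item);
                return e.item;
            }
            // stale entry: ditch it and poll again
        }
        return null;
    }
}
```

The heap never needs a delete or a decrease-key; staleness is detected lazily at poll time by comparing against the map.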
It depends a lot on whether you have direct control of when the values change.
If you know when the values change, you can remove and reinsert (which in fact is fairly expensive, as removing requires a linear scan over the heap!).
Alternatively, you can use an UpdatableHeap structure (not in stock Java, though). Essentially, that is a heap that tracks the positions of its elements in a hashmap, so that when the priority of an element changes, it can repair the heap. Third, you can look for a Fibonacci heap, which does the same.
Depending on your update rate, a linear scan / quicksort / QuickSelect each time might also work. In particular, if you have many more updates than pulls, this is the way to go. QuickSelect is probably best if you have batches of updates followed by batches of pull operations.
To trigger reheapify try this:
if (!priorityQueue.isEmpty()) {
    priorityQueue.add(priorityQueue.remove());
}
Something I've tried, and it works so far, is peeking to see whether the reference to the object you're changing is the same as the head of the PriorityQueue. If it is, you poll(), change, and re-insert; otherwise you can change without polling, because when the head is eventually polled, the heap is heapified anyway.
DOWNSIDE: this misbehaves for objects with the same priority.
Is there a better way other than just creating a wrapper class around the PriorityQueue to do this?
It depends on the definition of "better" and the implementation of the wrapper.
If the implementation of the wrapper is to re-insert the value using the PriorityQueue's .remove(...) and .add(...) methods, it's important to point out that .remove(...) runs in O(n) time. Depending on the heap implementation, updating the priority of a value can be done in O(log n) or even O(1) time, so this wrapper suggestion may fall short of common expectations.
If you want to minimize your implementation effort, as well as the risk of bugs of any custom solution, then a wrapper that performs re-insert looks easy and safe.
If you want the implementation to be faster than O(n), then you have some options:
Implement a heap yourself. The Wikipedia entry describes multiple variants with their properties. This approach is likely to get you the best performance; at the same time, the more code you write yourself, the greater the risk of bugs.
Implement a different kind of wrapper: handle updating the priority by marking the old entry as removed and adding a new entry with the revised priority. This is relatively easy to do (less code), see below, though it has its own caveats.
I came across the second idea in Python's documentation,
and applied it to implement a reusable data structure in Java (see caveats at the bottom):
public class UpdatableHeap<T> {
    private final PriorityQueue<Node<T>> pq =
            new PriorityQueue<>(Comparator.comparingInt(node -> node.priority));
    private final Map<T, Node<T>> entries = new HashMap<>();

    public void addOrUpdate(T value, int priority) {
        if (entries.containsKey(value)) {
            entries.remove(value).removed = true;
        }
        Node<T> node = new Node<>(value, priority);
        entries.put(value, node);
        pq.add(node);
    }

    public T pop() {
        while (!pq.isEmpty()) {
            Node<T> node = pq.poll();
            if (!node.removed) {
                entries.remove(node.value);
                return node.value;
            }
        }
        throw new IllegalStateException("pop from empty heap");
    }

    public boolean isEmpty() {
        return entries.isEmpty();
    }

    private static class Node<T> {
        private final T value;
        private final int priority;
        private boolean removed = false;

        private Node(T value, int priority) {
            this.value = value;
            this.priority = priority;
        }
    }
}
Note some caveats:
Entries marked removed stay in memory until they are popped
This can be unacceptable in use cases with very frequent updates
The internal Node wrapper around the actual values is extra memory overhead (constant per entry). There is also an internal Map, mapping every value currently in the priority queue to its Node wrapper.
Since the values are used in a map, users must be aware of the usual cautions when using a map, and make sure to have appropriate equals and hashCode implementations.
