In Java (and similarly in PHP) the ArrayDeque implementation always keeps its capacity at a power of 2:
http://hg.openjdk.java.net/jdk8/jdk8/jdk/file/687fd7c7986d/src/share/classes/java/util/ArrayDeque.java#l126
For HashMap this choice is clear: it gives a uniform element distribution based on the low bits of a 32-bit hash. But a Deque inserts and removes elements sequentially.
Also, ArrayList doesn't restrict its capacity to a power of two; it just ensures the capacity is at least the number of elements.
So, why does the Deque implementation require its capacity to be a power of 2?
I guess it is for performance reasons. For example, let's look at the implementation of the addLast function:
public void addLast(E e) {
    if (e == null)
        throw new NullPointerException();
    elements[tail] = e;
    if ((tail = (tail + 1) & (elements.length - 1)) == head)
        doubleCapacity();
}
So, instead of tail = (tail + 1) % elements.length it is possible to write tail = (tail + 1) & (elements.length - 1), because & works faster than %. Such constructions appear many times in ArrayDeque's source code.
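To convince yourself of the equivalence, here is a standalone sketch (my own demo, not JDK code):

public class MaskDemo {
    public static void main(String[] args) {
        int length = 16; // must be a power of two for the mask trick to work
        for (int i = 0; i < 100; i++) {
            // for non-negative i, i % length == i & (length - 1)
            if ((i % length) != (i & (length - 1)))
                throw new AssertionError("mismatch at " + i);
        }
        // Bonus: the mask also wraps negative indices (useful for head - 1),
        // which % does not:
        System.out.println((-1) % 16); // -1
        System.out.println((-1) & 15); // 15
    }
}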
Finally I found it!
The reason is not just performance and bit-mask operations (yes, they are faster, but not significantly). The real reason is to let the elements array wrap around when add and remove operations are used sequentially. In other words: to reuse the cells released by remove() operations.
Consider the following examples (initial capacity is 16):
Only add():
add 15 elements => head=0, tail=15
add 5 more elements => doubleCapacity() => head=0, tail=20, capacity=32
add()-remove()-add():
add 15 elements => head=0, tail=15
remove 10 elements => head advances over the freed indexes => head=10, tail=15
add 5 more elements => the capacity remains 16, and the elements[] array is not rebuilt or reallocated! The new elements are placed at the beginning of the array, in the cells freed by the removals => head=10, tail=4 (tail loops back to the start of the array: 15->0->1->2->3->4). Note that the values 16-19 land at indexes 0-3.
So in this case, using powers of two and concise bit operations makes much more sense. With this approach, an operation like if ((tail = (tail + 1) & (elements.length - 1)) == head) assigns the wrapped tail and verifies in one step that it does not overlap the head (yes, the snake where the tail actually bites the head :) ).
The code snippet to play around with:

ArrayDeque<String> q = new ArrayDeque<>(15); // capacity is 16
// add 15 elements
q.add("0"); q.add("1"); q.add("2"); q.add("3"); q.add("4");
q.add("5"); q.add("6"); q.add("7"); q.add("8"); q.add("9");
q.add("10"); q.add("11"); q.add("12"); q.add("13"); q.add("14");
// remove 10 elements from the head => frees cells at the start of elements[]
q.poll(); q.poll(); q.poll(); q.poll(); q.poll();
q.poll(); q.poll(); q.poll(); q.poll(); q.poll();
// add 5 elements => tail LOOPS BACK; elements[] is not reallocated!
q.add("15"); q.add("16"); q.add("17"); q.add("18"); q.add("19");
q.poll();
Powers of 2 lend themselves to certain masking operations, for example extracting the low-order bits of an integer.
So if the size is 64, then 64 - 1 is 63, which is 111111 in binary, and index & 63 always produces a valid index into the 64 slots.
This facilitates locating or placing elements within the deque.
Good question.
Looking at the code:
As you said, the capacity is always a power of two. Furthermore, the deque is never allowed to reach capacity.
public class ArrayDeque<E> extends AbstractCollection<E>
                           implements Deque<E>, Cloneable, Serializable
{
    /**
     * The array in which the elements of the deque are stored.
     * The capacity of the deque is the length of this array, which is
     * always a power of two. The array is never allowed to become
     * full, except transiently within an addX method where it is
     * resized (see doubleCapacity) immediately upon becoming full,
     * thus avoiding head and tail wrapping around to equal each
     * other....
The "power of two" convention simplifies "initial size":
/**
 * Allocates empty array to hold the given number of elements.
 *
 * @param numElements the number of elements to hold
 */
private void allocateElements(int numElements) {
    int initialCapacity = MIN_INITIAL_CAPACITY;
    // Find the best power of two to hold elements.
    // Tests "<=" because arrays aren't kept full.
    if (numElements >= initialCapacity) {
        initialCapacity = numElements;
        initialCapacity |= (initialCapacity >>> 1);
        initialCapacity |= (initialCapacity >>> 2);
        initialCapacity |= (initialCapacity >>> 4);
        initialCapacity |= (initialCapacity >>> 8);
        initialCapacity |= (initialCapacity >>> 16);
        initialCapacity++;

        if (initialCapacity < 0)    // Too many elements, must back off
            initialCapacity >>>= 1; // Good luck allocating 2 ^ 30 elements
    }
    elements = new Object[initialCapacity];
}
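To see what the |= cascade computes, here is a standalone sketch (my own helper, not JDK code): it smears the highest set bit downward so that adding 1 rounds any positive request up to the next power of two strictly greater than it.

public class RoundUp {
    // Same |= cascade as allocateElements: smear the top bit down, then add 1.
    // (Ignores the overflow back-off that the JDK adds for huge requests.)
    static int nextPowerOfTwo(int n) {
        n |= n >>> 1;
        n |= n >>> 2;
        n |= n >>> 4;
        n |= n >>> 8;
        n |= n >>> 16;
        return n + 1;
    }

    public static void main(String[] args) {
        System.out.println(nextPowerOfTwo(15)); // 16
        System.out.println(nextPowerOfTwo(16)); // 32 (capacity doubles: arrays aren't kept full)
        System.out.println(nextPowerOfTwo(20)); // 32
    }
}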
Finally, note the use of "mask":
/**
 * Removes the last occurrence of the specified element in this
 * deque (when traversing the deque from head to tail).
 * If the deque does not contain the element, it is unchanged.
 * More formally, removes the last element {@code e} such that
 * {@code o.equals(e)} (if such an element exists).
 * Returns {@code true} if this deque contained the specified element
 * (or equivalently, if this deque changed as a result of the call).
 *
 * @param o element to be removed from this deque, if present
 * @return {@code true} if the deque contained the specified element
 */
public boolean removeLastOccurrence(Object o) {
    if (o == null)
        return false;
    int mask = elements.length - 1;
    int i = (tail - 1) & mask;
    Object x;
    while ((x = elements[i]) != null) {
        if (o.equals(x)) {
            delete(i);
            return true;
        }
        i = (i - 1) & mask;
    }
    return false;
}
private boolean delete(int i) {
    checkInvariants();
    ...
    // Invariant: head <= i < tail mod circularity
    if (front >= ((t - h) & mask))
        throw new ConcurrentModificationException();
    ...
    // Optimize for least element motion
    if (front < back) {
        if (h <= i) {
            System.arraycopy(elements, h, elements, h + 1, front);
        } else { // Wrap around
            System.arraycopy(elements, 0, elements, 1, i);
            elements[0] = elements[mask];
            System.arraycopy(elements, h, elements, h + 1, mask - h);
        }
        elements[h] = null;
        head = (h + 1) & mask;
I'm having a hard time understanding what the mask integer is for (2nd line). I get that it regulates where values are placed in the double-ended queue, but I don't get how exactly. This is part of the code of a double-ended queue, just to give some context.
public class DEQueue {
    private int mask = (1 << 3) - 1;
    private String[] es = new String[mask + 1];
    private int head, tail;

    public void addFirst(String e) {
        es[head = (head - 1) & mask] = e;
        if (tail == head) {
            doubleCapacity();
        }
    }

    public String pollFirst() {
        String result = es[head];
        es[head] = null;
        if (tail != head) {
            head = (head + 1) & mask;
        }
        return result;
    }

    public String peekFirst() {
        return es[head];
    }

    public void addLast(String e) {
        es[tail] = e;
        tail = (tail + 1) & mask;
        if (tail == head) {
            doubleCapacity();
        }
    }
mask is used to wrap the head and tail indices around when elements are added or removed. To be usable as a bit mask, it is created by first shifting 1 left a certain number of bits (here 3) and then subtracting 1 to set all lower bits to 1.
In your example the initial value is (1 << 3) - 1, which is binary 111. This represents an initial deque (double-ended queue) capacity of 8 (2^3), since 0 is used as an index as well.
Now let's imagine addFirst(...) is called on an empty deque:
head is initially 0
head - 1 is -1; in two's complement this is equivalent to binary 1...111 (all bits are 1)
Applying & mask works as a bit mask and selects only the bits which have the value 1 in mask, that is the lowest three bits, here: 1...111 & 111. This wraps the -1 from the previous step to 7 (binary 111).
In the end, that means the addFirst(...) call caused head to wrap around and place the element at es[7], the last position in the array.
Now let's consider the similar situation of calling addLast(...) when tail already points to the last element of the array, assuming index 7 here again. Note that in your implementation tail seems to point to the next free index at the end of the deque.
tail + 1 is 8; the binary representation is 1000
& mask again works as a bit mask: 1000 & 111. It again selects only the lowest three bits, which are all 0 in this case. This effectively wraps the 8 to 0, the first index in the array.
(The situation is the same for calls to pollFirst().)
For all other calls to addFirst(...) and addLast(...), applying the bit mask & mask has no effect and leaves the indices unchanged, because they are in the range [0, array.length).
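To see the wrap-around concretely, here is a minimal standalone sketch of the mask arithmetic (my own demo, not part of the class in the question):

public class WrapDemo {
    public static void main(String[] args) {
        int mask = (1 << 3) - 1; // binary 111, capacity 8

        int head = 0;
        head = (head - 1) & mask; // addFirst on an empty deque
        System.out.println(head); // 7: wrapped to the last index of the array

        int tail = 7;
        tail = (tail + 1) & mask; // addLast past the last index
        System.out.println(tail); // 0: wrapped to the first index
    }
}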
I found a nice Java implementation of bit twiddling techniques here; I believe many of them are based on that document. For toBitSet and nextPermutation in particular, I wonder: is it possible to make them support data types that go beyond 64 bits (long)? And how?
Would it be possible to define such methods for Java's BigInteger, for example?
Or would iterating over the k-bit binary strings of length n (where k = the number of bits set to 1 and n = the total number of bits), as nextPermutation does, necessitate a (slower) String implementation (i.e. representing the binary numbers as n-character strings)?
Below is the source of the aforementioned operations. I'm sorry for not being able to add more information; I admit I have little knowledge of bitwise operations in general.
Any suggestions would be much appreciated.
toBitSet:

/**
 * Converts {@code value} into a {@link BitSet} of size {@code size}.
 */
public static final BitSet toBitSet(final int size, final long value) {
    BitSet bits = new BitSet(size);
    int idx = 0;
    long tmp = value;
    while (tmp != 0L) {
        if (tmp % 2L != 0L) {
            bits.set(idx);
        }
        ++idx;
        tmp = tmp >>> 1;
    }
    return bits;
}
nextPermutation:

/**
 * Compute the lexicographically next bit permutation.
 *
 * Suppose we have a pattern of N bits set to 1 in an integer and we want the next permutation of
 * N 1 bits in a lexicographical sense. For example, if N is 3 and the bit pattern is 00010011,
 * the next patterns would be 00010101, 00010110, 00011001, 00011010, 00011100, 00100011, and so
 * forth.
 */
public static final long nextPermutation(long val) {
    long tmp = val | (val - 1);
    return (tmp + 1) | (((-tmp & -~tmp) - 1) >> (Long.numberOfTrailingZeros(val) + 1));
}
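For what it's worth, here is a small driver of my own (not from the linked source) that reproduces the sequence from the Javadoc example:

public class PermDemo {
    // nextPermutation exactly as in the question
    static long nextPermutation(long val) {
        long tmp = val | (val - 1);
        return (tmp + 1) | (((-tmp & -~tmp) - 1) >> (Long.numberOfTrailingZeros(val) + 1));
    }

    public static void main(String[] args) {
        long v = 0b00010011; // N = 3 bits set
        for (int i = 0; i < 7; i++) {
            System.out.println(Long.toBinaryString(v));
            v = nextPermutation(v); // 10011, 10101, 10110, 11001, 11010, 11100, 100011
        }
    }
}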
Consider this snippet, extracted from the list iterator of the sublist inside an ArrayList:

@SuppressWarnings("unchecked")
public E next() {
    checkForComodification();
    int i = cursor;
    if (i >= SubList.this.size)
        throw new NoSuchElementException();
    Object[] elementData = ArrayList.this.elementData;
    if (offset + i >= elementData.length)
        throw new ConcurrentModificationException();
    cursor = i + 1;
    return (E) elementData[offset + (lastRet = i)];
}
cursor is initially set to 0. Imagine the ArrayList and its sublist as below:

element   original   sublist
0         a[0]
10        a[1]
20        a[2]       s[0]
30        a[3]       s[1]
40        a[4]       s[2]
50        a[5]       s[3]
60        a[6]       s[4]
70        a[7]
80        a[8]
90        a[9]
I see that the condition used in the above next method

if (offset + i >= elementData.length)
    throw new ConcurrentModificationException();

will never hold true.
Because:
1) If you perform an add operation on the sublist, it internally calls the add method on the main ArrayList and increments its size by 1. After performing the add on the main list, it increments the sublist's size. If expansion is needed, the main list (internally backed by an array) grows to the necessary size.
offset is the difference between the starting positions of the ArrayList and the sublist.
Since the condition on i in the above snippet is already checked (i >= SubList.this.size), the condition offset + i >= elementData.length is never going to be true.
2) Similarly for the remove operation: the backing array never shrinks. Removal is tracked by reducing size by 1. The backing array's length and size are two different things;
size is not equal to the backing array's length.
/**
 * The array buffer into which the elements of the ArrayList are stored.
 * The capacity of the ArrayList is the length of this array buffer. Any
 * empty ArrayList with elementData == DEFAULTCAPACITY_EMPTY_ELEMENTDATA
 * will be expanded to DEFAULT_CAPACITY when the first element is added.
 */
transient Object[] elementData; // non-private to simplify nested class access

/**
 * The size of the ArrayList (the number of elements it contains).
 *
 * @serial
 */
private int size;
What am I missing? Please suggest.
The clue is the ConcurrentModificationException rather than a NoSuchElementException.
If another thread removes elements (or otherwise shrinks the backing array, e.g. via trimToSize()) during the execution of this code, the condition may become true.
I had an interview and there was the following question:
Find unique numbers from sorted array in less than O(n) time.
Ex: 1 1 1 5 5 5 9 10 10
Output: 1 5 9 10
I gave a solution, but it was O(n).
Edit: Sorted array size is approx 20 billion and unique numbers are approx 1000.
Divide and conquer:
Look at the first and last element of a sorted sequence (the initial sequence is data[0]..data[data.length-1]).
If both are equal, the only element in the sequence is the first (no matter how long the sequence is).
If they are different, divide the sequence and repeat for each subsequence.
For m distinct values this needs roughly O(m log n) comparisons: close to O(log n) when m is small, and O(n) only in the worst case (when each element is different).
Java code:

public static List<Integer> findUniqueNumbers(int[] data) {
    List<Integer> result = new LinkedList<Integer>();
    findUniqueNumbers(data, 0, data.length - 1, result, false);
    return result;
}

private static void findUniqueNumbers(int[] data, int i1, int i2, List<Integer> result, boolean skipFirst) {
    int a = data[i1];
    int b = data[i2];
    // homogeneous sequence a...a
    if (a == b) {
        if (!skipFirst) {
            result.add(a);
        }
    } else {
        // divide & conquer
        int i3 = (i1 + i2) / 2;
        findUniqueNumbers(data, i1, i3, result, skipFirst);
        findUniqueNumbers(data, i3 + 1, i2, result, data[i3] == data[i3 + 1]);
    }
}
I don't think it can be done in less than O(n). Take the case where the array contains 1 2 3 4 5: in order to get the correct output, each element of the array would have to be looked at, hence O(n).
If your sorted array of size n has m distinct elements, you can do it in O(m log n).
Note that this is going to be efficient when m << n (e.g. m = 2 and n = 100).
Algorithm:
Initialization: current element y = first element x[0]
Step 1: do a binary search for the last occurrence of y in x (this can be done in O(log n) time). Let its index be i.
Step 2: set y = x[i+1] and go to step 1.
Edit: in cases where m = O(n) this algorithm is going to perform badly. To alleviate that, you can run it in parallel with the regular O(n) algorithm: the meta-algorithm consists of my algorithm and the O(n) algorithm running in parallel, stopping when either of the two completes.
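A minimal Java sketch of this algorithm (class and method names are my own, not from the answer):

import java.util.ArrayList;
import java.util.List;

public class UniqueFinder {
    // O(m log n) for m distinct values: binary-search for the end of each run.
    public static List<Integer> uniques(int[] a) {
        List<Integer> result = new ArrayList<>();
        int i = 0;
        while (i < a.length) {
            int y = a[i];
            result.add(y);
            i = lastOccurrence(a, i, y) + 1; // jump past the whole run of y
        }
        return result;
    }

    // Binary search for the last index >= lo that still holds the value y.
    private static int lastOccurrence(int[] a, int lo, int y) {
        int hi = a.length - 1;
        while (lo < hi) {
            int mid = lo + (hi - lo + 1) / 2; // round up so the loop terminates
            if (a[mid] == y) lo = mid; else hi = mid - 1;
        }
        return lo;
    }

    public static void main(String[] args) {
        int[] data = {1, 1, 1, 5, 5, 5, 9, 10, 10};
        System.out.println(uniques(data)); // [1, 5, 9, 10]
    }
}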
Since the data consists of integers, there are a finite number of unique values that can occur between any two values. So, start by looking at the first and last value in the array. If a[length-1] - a[0] < length - 1, there will be some repeating values. Put a[0] and a[length-1] into some constant-access-time container like a hash set. If the two values are equal, you know that there is only one unique value in the array and you are done. Since the array is sorted, if the two values are different, you can look at the middle element next. If the middle element is already in the set of values, you know that you can skip the whole left part of the array and only analyze the right part recursively. Otherwise, analyze both the left and right parts recursively.
Depending on the data in the array, you will be able to get the set of all unique values in a different number of operations. You get them in constant time O(1) if all the values are the same, since you will know it after checking only the first and last element. If there are "relatively few" unique values, your complexity will be close to O(log N), because after each partition you will "quite often" be able to throw away at least one half of the analyzed sub-array. If the values are all unique and a[length-1] - a[0] = length - 1, you can also "define" the set in constant time, because they have to be the consecutive numbers from a[0] to a[length-1]. However, in order to actually list them, you would have to output each number, and there are N of them.
Perhaps someone can provide a more formal analysis, but my estimate is that this algorithm is roughly linear in the number of unique values rather than the size of the array. This means that if there are few unique values, you can get them in few operations even for a huge array (e.g. in constant time regardless of array size if there is only one unique value). Since the number of unique values is no greater than the size of the array, I claim that this makes this algorithm "better than O(N)" (or, strictly: "not worse than O(N) and better in many cases").
import java.util.*;

/**
 * Remove duplicates from a sorted array in average O(log(n)), worst O(n).
 * @author XXX
 */
public class UniqueValue {
    public static void main(String[] args) {
        int[] test = {-1, -1, -1, -1, 0, 0, 0, 0, 2, 3, 4, 5, 5, 6, 7, 8};
        UniqueValue u = new UniqueValue();
        System.out.println(u.getUniqueValues(test, 0, test.length - 1));
    }

    // i must be the start index, j must be the end index
    public List<Integer> getUniqueValues(int[] array, int i, int j) {
        if (array == null || array.length == 0) {
            return new ArrayList<Integer>();
        }
        List<Integer> result = new ArrayList<>();
        if (array[i] == array[j]) {
            result.add(array[i]);
        } else {
            int mid = (i + j) / 2;
            result.addAll(getUniqueValues(array, i, mid));
            // skip forward past duplicates of array[mid] to avoid dividing the same run twice
            while (mid < j && array[mid] == array[++mid]);
            if (array[(i + j) / 2] != array[mid]) {
                result.addAll(getUniqueValues(array, mid, j));
            }
        }
        return result;
    }
}
If you are provided the head of a linked list and are asked to reverse every k-sized sequence of nodes, how might this be done in Java? E.g., a->b->c->d->e->f->g->h with k = 3 would become c->b->a->f->e->d->h->g.
Any general help or even pseudocode would be greatly appreciated! Thanks!
If k is expected to be reasonably small, I would just go for the simplest thing: ignore the fact that it's a linked list at all, and treat each subsequence as just an array-like collection of things to be reversed.
So, if your linked list's node class is a Node<T>, create a Node<?>[] of size k. For each segment, load k nodes into the array, then just reverse their elements with a simple for loop. In pseudocode:
// reverse the elements within the k nodes
for i from 0 to k/2:
    nodeI = segment[i]
    nodeE = segment[segment.length - i - 1]
    tmp = nodeI.elem
    nodeI.elem = nodeE.elem
    nodeE.elem = tmp
Pros: very simple, O(N) performance, takes advantage of an easily recognizable reversing algorithm.
Cons: requires a k-sized array (just once, since you can reuse it per segment)
Also note that this means that each Node doesn't move in the list, only the objects the Node holds. This means that each Node will end up holding a different item than it held before. This could be fine or not, depending on your needs.
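A minimal Java sketch of this approach (the Node class and all names below are assumptions, not from the question):

import java.util.ArrayList;
import java.util.List;

class Node<T> {
    T elem;
    Node<T> next;
    Node(T elem) { this.elem = elem; }
}

public class KReverse {
    // Reverse the *elements* of every k-sized segment; the nodes stay in place.
    static <T> void reverseEveryK(Node<T> head, int k) {
        List<Node<T>> segment = new ArrayList<>(k);
        Node<T> cur = head;
        while (cur != null) {
            segment.clear();
            for (int i = 0; i < k && cur != null; i++) { // collect up to k nodes
                segment.add(cur);
                cur = cur.next;
            }
            for (int i = 0, j = segment.size() - 1; i < j; i++, j--) { // swap ends inward
                T tmp = segment.get(i).elem;
                segment.get(i).elem = segment.get(j).elem;
                segment.get(j).elem = tmp;
            }
        }
    }

    public static void main(String[] args) {
        // build a->b->c->d->e->f->g->h
        Node<String> head = new Node<>("a"), cur = head;
        for (String s : new String[]{"b", "c", "d", "e", "f", "g", "h"}) {
            cur.next = new Node<>(s);
            cur = cur.next;
        }
        reverseEveryK(head, 3);
        StringBuilder sb = new StringBuilder();
        for (Node<String> p = head; p != null; p = p.next)
            sb.append(p.elem).append(p.next != null ? "->" : "");
        System.out.println(sb); // c->b->a->f->e->d->h->g
    }
}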
This is pretty high-level, but I think it'll give some guidance.
I'd have a helper method like void swap3(Node first, Node last) that takes three elements at an arbitrary position of the list and reverses them. This shouldn't be hard, and could be done recursively (swap the outer elements, recurse on the inner elements until the size of the list is 0 or 1). Now that I think of it, you could generalize this into swapK() easily if you're using recursion.
Once that is done, you can just walk along your linked list and call swapK() every k nodes. If the size of the list isn't divisible by k, you could either not swap that last bit, or reverse the last length % k nodes using your swapping technique.
TIME O(n); SPACE O(1)
A usual requirement of list reversal is that you do it in O(n) time and O(1) space. This rules out recursion, a stack, or a temporary array (what if K == n?), etc.
Hence the challenge here is to modify an in-place reversal algorithm to account for the K factor. Instead of K I use dist, for distance.
Here is a simple in-place reversal algorithm. Use three pointers to walk the list in place: b points to the head of the new list; c points to the moving head of the unprocessed list; a facilitates swapping between b and c.
A->B->C->D->E->F->G->H->I->J->L    // original
A<-B<-C<-D E->F->G->H->I->J->L     // during processing
         ^ ^
         | |
         b c
`a` is the variable that allows us to move `b` and `c` without losing either of the lists.
Node simpleReverse(Node n) { // n is head
    if (null == n || null == n.next)
        return n;
    Node a = n, b = a.next, c = b.next;
    a.next = null;
    b.next = a;
    while (null != c) {
        a = c;
        c = c.next;
        a.next = b;
        b = a;
    }
    return b;
}
To convert the simpleReverse algorithm to a chunkReverse algorithm, do the following:
1] After reversing the first chunk, set head to b; head is the permanent head of the resulting list.
2] For all the other chunks, set tail.next to b; recall that b is the head of the chunk just processed.
Some other details:
3] If the list has one or fewer nodes, or dist is 1 or less, return the list without processing.
4] Use a counter cnt to track when dist consecutive nodes have been reversed.
5] Use a variable tail to track the tail of the chunk just processed, and tmp to track the tail of the chunk being processed.
6] Notice that before a chunk is processed, its head, which is bound to become its tail, is the first node you encounter: so set it to tmp, a temporary tail.
public Node reverse(Node n, int dist) {
    if (dist <= 1 || null == n || null == n.right)
        return n;
    Node tail = n, head = null, tmp = null;
    while (true) {
        Node a = n, b = a.right;
        n = b.right;
        a.right = null;
        b.right = a;
        int cnt = 2;
        while (null != n && cnt < dist) {
            a = n;
            n = n.right;
            a.right = b;
            b = a;
            cnt++;
        }
        if (null == head) head = b;
        else {
            tail.right = b;
            tail = tmp;
        }
        tmp = n;
        if (null == n) return head;
        if (null == n.right) {
            tail.right = n;
            return head;
        }
    } // while(true)
}
E.g. in Common Lisp:

(defun rev-k (k sq)
  (if (<= (length sq) k)
      (reverse sq)
      (concatenate 'list (reverse (subseq sq 0 k)) (rev-k k (subseq sq k)))))
Another way, e.g. in F# using a Stack:

open System.Collections.Generic

let rev_k k (list: 'T list) =
    seq {
        let stack = new Stack<'T>()
        for x in list do
            stack.Push(x)
            if stack.Count = k then
                while stack.Count > 0 do
                    yield stack.Pop()
        while stack.Count > 0 do
            yield stack.Pop()
    }
    |> Seq.toList
Use a stack and repeatedly remove k items from the list, push them onto the stack, then pop them and add them back in place. Not sure if it's the best solution, but stacks offer a natural way of reversing things. Notice that this also works if you have a queue instead of a list:
simply dequeue k items, push them onto the stack, pop them from the stack, and enqueue them :)
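A minimal sketch of the queue variant (my own code, using standard java.util collections):

import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Deque;
import java.util.LinkedList;
import java.util.Queue;

public class StackReverse {
    // Reverse every k items of a queue by pushing them through a stack.
    static <T> Queue<T> reverseEveryK(Queue<T> queue, int k) {
        Queue<T> result = new LinkedList<>();
        Deque<T> stack = new ArrayDeque<>();
        while (!queue.isEmpty()) {
            for (int i = 0; i < k && !queue.isEmpty(); i++)
                stack.push(queue.poll()); // dequeue up to k items onto the stack
            while (!stack.isEmpty())
                result.add(stack.pop()); // pop them back out, reversed
        }
        return result;
    }

    public static void main(String[] args) {
        Queue<String> q = new LinkedList<>(Arrays.asList("a", "b", "c", "d", "e", "f", "g", "h"));
        System.out.println(reverseEveryK(q, 3)); // [c, b, a, f, e, d, h, g]
    }
}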
This implementation uses the ListIterator class:

LinkedList<T> list;
// inside the method, after the method's parameter checks
ListIterator<T> it = (ListIterator<T>) list.iterator();
ListIterator<T> reverseIt = (ListIterator<T>) list.listIterator(k);
for (int i = 0; i < k / 2; i++) {
    T element = it.next();
    it.set(reverseIt.previous());
    reverseIt.set(element);
}