I'm looking at the Algorithms course on coursera.org by Robert Sedgewick. He mentions that keys should be immutable for a priority queue. Why is that?
Here is a simple example:
import java.util.Comparator;
import java.util.PriorityQueue;
import java.util.concurrent.atomic.AtomicInteger;

public class Main // enclosing class added so the snippet compiles
{
    public static void main(String[] args)
    {
        Comparator<AtomicInteger> comparator = new AtomicIntegerComparator();
        PriorityQueue<AtomicInteger> queue =
                new PriorityQueue<AtomicInteger>(10, comparator);
        AtomicInteger lessInteger = new AtomicInteger(10);
        AtomicInteger middleInteger = new AtomicInteger(20);
        AtomicInteger maxInteger = new AtomicInteger(30);
        queue.add(lessInteger);
        queue.add(middleInteger);
        queue.add(maxInteger);
        while (queue.size() != 0)
        {
            System.out.println(queue.remove());
        }
        queue.add(lessInteger);
        queue.add(middleInteger);
        queue.add(maxInteger);
        lessInteger.addAndGet(30); // mutate a key that is already inside the queue
        while (queue.size() != 0)
        {
            System.out.println(queue.remove());
        }
    }
}

class AtomicIntegerComparator implements Comparator<AtomicInteger>
{
    @Override
    public int compare(AtomicInteger x, AtomicInteger y)
    {
        if (x.get() < y.get())
        {
            return -1;
        }
        if (x.get() > y.get())
        {
            return 1;
        }
        return 0;
    }
}
You will get an output like
10
20
30
40
20
30
Note that in the second round of removals, 40 is removed first, although you would expect it to come out last: when it was added it held the value 10, so it was placed at the root of the heap, and mutating it afterwards did not move it.
However, if you add another element to the same queue, it happens to reorder correctly in this case:
queue.add(lessInteger);
queue.add(middleInteger);
queue.add(maxInteger);
lessInteger.addAndGet(30);
queue.add(new AtomicInteger(5));
while (queue.size() != 0)
{
System.out.println(queue.remove());
}
would result in
5
20
30
40
Check the siftUpUsingComparator method of PriorityQueue:
private void siftUpUsingComparator(int k, E x) {
while (k > 0) {
int parent = (k - 1) >>> 1;
Object e = queue[parent];
if (comparator.compare(x, (E) e) >= 0)
break;
queue[k] = e;
k = parent;
}
queue[k] = x;
}
Is this applicable to other collections?
Well, that depends on the collection's implementation.
For example, TreeSet falls into the same category: it uses the Comparator on insert, not while iterating.
TreeSet<AtomicInteger> treeSets = new TreeSet<AtomicInteger>(comparator);
lessInteger.set(10);
treeSets.add(middleInteger);
treeSets.add(lessInteger);
treeSets.add(maxInteger);
lessInteger.addAndGet(30);
for (Iterator<AtomicInteger> iterator = treeSets.iterator(); iterator.hasNext();) {
AtomicInteger atomicInteger = iterator.next();
System.out.println(atomicInteger);
}
Would result in
40
20
30
This is not what you would expect.
The reason why keys (entries) of a PriorityQueue should be immutable is that the PriorityQueue can not detect changes of these keys. For example, when you insert a key with a certain priority, it will be placed at a certain position in the queue. (Actually, in the backing implementation, it is more like a "tree", but this does not matter here). When you now modify this object, then its priority may change. But it will not change its position in the queue, because the queue does not know that the object was modified. The placement of this object in the queue may then simply be wrong, and the queue will become inconsistent (that is, return objects in the wrong order).
Note that the objects do not strictly have to be completely immutable. The important point is that there may be no modification of the objects that affects their priority. It's perfectly feasible to modify a field of the object that is not involved in the computation of the priority. But care has to be taken, because whether a change affects the priority may or may not be specified explicitly in the class of the respective entries.
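If a priority-relevant field does have to change, the usual workaround is to take the entry out of the queue first, mutate it, and re-insert it so that the heap can sift it into place. A minimal sketch, reusing the queue and lessInteger variables from the first example:
// Remove/mutate/re-add pattern. PriorityQueue.remove(Object) is O(n) and
// matches via equals(); AtomicInteger uses identity equality, so the lookup
// works regardless of the stored value.
queue.remove(lessInteger); // take it out while the heap is still consistent
lessInteger.addAndGet(30); // mutate only while it is outside the queue
queue.add(lessInteger);    // sift-up places it at its new, correct position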
Here is a simple example that shows how it breaks - the first queue prints numbers in order as expected but the second doesn't because one of the numbers has been mutated after having been added to the queue.
public static void main(String[] args) throws Exception {
PriorityQueue<Integer> ok = new PriorityQueue<>(Arrays.asList(1, 2, 3));
Integer i = null;
while ((i = ok.poll()) != null) System.out.println(i); //1,2,3
PriorityQueue<AtomicInteger> notOk = new PriorityQueue<>(Comparator.comparing(AtomicInteger::intValue));
AtomicInteger one = new AtomicInteger(1);
notOk.add(one);
notOk.add(new AtomicInteger(2));
notOk.add(new AtomicInteger(3));
one.set(7);
AtomicInteger ai = null;
while ((ai = notOk.poll()) != null) System.out.println(ai); //7,2,3
}
I have two ArrayLists. ArrayList A has 8.1k elements and ArrayList B has 81k elements.
I need to iterate through B, search for that particular item in A, then change a field in the matched element in list B.
Here's my code:
private void mapAtoB(List<A> aList, ListIterator<B> it) {
AtomicInteger i = new AtomicInteger(-1);
while(it.hasNext()) {
System.out.print(i.incrementAndGet() + ", ");
B b = it.next();
aList.stream().filter(a -> b.equalsB(a)).forEach(a -> {
b.setId(String.valueOf(a.getRedirectId()));
it.set(b);
});
}
System.out.println();
}
public class B {
public boolean equalsB(A a) {
if (a == null) return false;
if (this.getFullURL().contains(a.getFirstName())) return true;
return false;
}
}
But this is taking forever. To finish this method it takes close to 15 minutes. Is there any way to optimize any of this? 15 min run time is way too much.
I'll be happy to see a good and thorough solution, meanwhile I can propose two ideas (or maybe two reincarnations of one).
The first one is to speed up the search for all objects of type A in one object of type B. For that, the Rabin-Karp algorithm seems applicable and simple enough to implement quickly; Aho-Corasick is harder, but will probably give better results (I'm not sure how much better).
The other option is to limit the number of objects of type B which must be fully processed for each object of A. For that you could e.g. build an inverse N-gram index: for each fullUrl you take all its substrings of length N ("N-grams"), and you build a map from each such N-gram to the set of B's that have that N-gram in their fullUrl. When searching for an object A, you take all of its N-grams, find the set of B's for each such N-gram and intersect all these sets; the intersection contains all B's that you need to fully process.
I implemented this approach quickly; for the sizes you specified it gives a 6-7x speedup for N=4. As N grows, search becomes faster, but building the index slows down (so if you can reuse the index you are probably better off choosing a bigger N). The index takes about 200 MB for the sizes you specified, so this approach will only scale this far with the growth of the collection of B's.
Assuming that all strings are longer than NGRAM_LENGTH, here's the quick and dirty code for building the index using Guava's SetMultimap (HashMultimap):
SetMultimap<String, B> idx = HashMultimap.create();
for (B b : bList) {
for (int i = 0; i < b.getFullURL().length() - NGRAM_LENGTH + 1; i++) {
idx.put(b.getFullURL().substring(i, i + NGRAM_LENGTH), b);
}
}
And for the search:
private void mapAtoB(List<A> aList, SetMultimap<String, B> mmap) {
for (A a : aList) {
Collection<B> possible = null;
for (int i = 0; i < a.getFirstName().length() - NGRAM_LENGTH + 1; i++) {
String ngram = a.getFirstName().substring(i, i + NGRAM_LENGTH);
Set<B> forNgram = mmap.get(ngram);
if (possible == null) {
possible = new ArrayList<>(forNgram);
} else {
possible.retainAll(forNgram);
}
if (possible.size() < 20) { // it's ok to scan through 20
break;
}
}
for (B b : possible) {
if (b.equalsB(a)) {
b.setId(String.valueOf(a.getRedirectId()));
}
}
}
}
A possible direction for further optimization would be to use hashes instead of the full N-grams, reducing the memory footprint and the cost of N-gram key comparisons.
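For instance, the index could be keyed by an int hash of each N-gram instead of the substring itself; collisions only enlarge the candidate sets, and the final equalsB() check filters out false positives anyway. A hypothetical variant of the index-building loop:
// Hypothetical variant: key the multimap by the N-gram's hash code.
SetMultimap<Integer, B> idx = HashMultimap.create();
for (B b : bList) {
    String url = b.getFullURL();
    for (int i = 0; i < url.length() - NGRAM_LENGTH + 1; i++) {
        idx.put(url.substring(i, i + NGRAM_LENGTH).hashCode(), b);
    }
}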
Why is ArrayList's size not right when multiple threads add elements to it?
int threadCount = 100;
CountDownLatch countDownLatch = new CountDownLatch(threadCount);
List<Object> list = new ArrayList<>();
for (int i = 0; i < threadCount; i++) {
    Thread thread = new Thread(new MyThread(list, countDownLatch));
    thread.start();
}
class MyThread implements Runnable {
// ......
@Override
public void run() {
for (int i = 0; i < 100; i++) {
list.add(new Object());
}
}
}
When this program is done, the size of the list should be 10000. Actually, the size may be 9950, 9965 or some other numbers. Why?
I know that why this program may raise IndexOutofBoundsException and why there are some nulls in it, but I just do not understand why the size is wrong.
I just do not understand why the size is wrong?
Because ArrayList is not thread-safe, two threads may add at the same index, so one thread's element overwrites the other's.
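One common fix (a sketch; depending on the read/write mix, CopyOnWriteArrayList or a concurrent queue may fit better) is to wrap the list so that add() locks it:
// Every add() now synchronizes on the list, so no two threads can write to
// the same index, and the final size is 10000 as expected.
List<Object> list = Collections.synchronizedList(new ArrayList<>());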
A related pitfall: if you call arraylist.clear() on the main UI thread while a background service is still operating on the same list, the list you read back on the UI thread after the background operation can end up with size 0.
// don't do this on the main UI thread
activity?.runOnUiThread {
    // after this, myarraylist's size is 0
    myarraylist.clear()
}
....
// some background operation adds to the arraylist
....
activity?.runOnUiThread {
    // if you set the adapter here it may see size 0
}
ArrayList is not thread-safe, so the result of any multi-threaded operation on one ArrayList object is undefined: anything can happen depending on the JVM, including but not limited to a wrong size, exceptions, and memory leaks.
It's simple:
you are performing an unsafe operation on an object,
so you can't predict the stability.
If you run it enough times, there might be one run where list.size() returns 10000.
Why doesn't it always return that? Because it's not thread-safe. Now consider the code of add in ArrayList:
public boolean add(E element) {
    ensureCapacity(size + 1); // Increments modCount!!
    elementData[size++] = element;
    return true;
}
Now suppose thread 10 is inside ensureCapacity when it gets descheduled, and another thread (say thread 11) comes in: both threads can then read the same size and end up inserting their two elements at the same slot. That is what happens here.
You should refer to the source code of ArrayList:
@Override public boolean add(E object) {
Object[] a = array;
int s = size;
if (s == a.length) {
Object[] newArray = new Object[s +
(s < (MIN_CAPACITY_INCREMENT / 2) ?
MIN_CAPACITY_INCREMENT : s >> 1)];
System.arraycopy(a, 0, newArray, 0, s);
array = a = newArray;
}
a[s] = object;
size = s + 1;
modCount++;
return true;
}
ArrayList uses System.arraycopy(a, 0, newArray, 0, s) to grow, and then writes the new element with a[s] = object; size = s + 1;. With multiple threads running, two add() calls can read the same size s, both write to slot a[s], and both set size to s + 1, so one element is overwritten and the size ends up less than expected. Worse, during a grow the field array is replaced, so another thread's write may land in the stale array and be lost entirely.
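Concretely, the lost update comes from the unsynchronized read-modify-write of size. One possible (hypothetical) interleaving of two add() calls:
// Thread A: int s = size;  // reads 5
// Thread B: int s = size;  // also reads 5 - nothing prevents this
// Thread A: a[5] = objA;
// Thread B: a[5] = objB;   // overwrites objA at the same slot
// Thread A: size = 6;
// Thread B: size = 6;      // two add() calls, but size grew by only one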
I am trying to convert a for loop to functional code. I need to look ahead one value and also look behind one value. Is that possible using streams?
The following code converts Roman text to its numeric value.
I'm not sure whether the reduce method with two/three arguments can help here.
int previousCharValue = 0;
int total = 0;
for (int i = 0; i < input.length(); i++) {
char current = input.charAt(i);
RomanNumeral romanNum = RomanNumeral.valueOf(Character.toString(current));
if (previousCharValue > 0) {
total += (romanNum.getNumericValue() - previousCharValue);
previousCharValue = 0;
} else {
if (i < input.length() - 1) {
char next = input.charAt(i + 1);
RomanNumeral nextNum = RomanNumeral.valueOf(Character.toString(next));
if (romanNum.getNumericValue() < nextNum.getNumericValue()) {
previousCharValue = romanNum.getNumericValue();
}
}
if (previousCharValue == 0) {
total += romanNum.getNumericValue();
}
}
}
No, this is not possible using streams, at least not easily. The stream API abstracts away from the order in which the elements are processed: the stream might be processed in parallel, or in reverse order. So "the next element" and "previous element" do not exist in the stream abstraction.
You should use the API best suited for the job: streams are excellent if you need to apply some operation to all elements of a collection and you are not interested in the order. If you need to process the elements in a certain order, you have to use iterators, or perhaps access the list elements through indices.
I haven't seen such a use case with streams, so I cannot say whether it is possible or not. But when I need to use streams with an index, I choose IntStream#range(0, table.length), and then in the lambdas I get the value from this table/list.
For example
int[] arr = {1,2,3,4};
int result = IntStream.range(0, arr.length)
.map(idx->idx>0 ? arr[idx] + arr[idx-1]:arr[idx])
.sum();
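Applied to the Roman-numeral question, the same index trick gives the one-character look-ahead directly. A sketch, assuming the RomanNumeral enum from the question:
// Each character counts negative when a larger value follows (IV = -1 + 5).
int total = IntStream.range(0, input.length())
    .map(i -> {
        int cur = RomanNumeral.valueOf(String.valueOf(input.charAt(i))).getNumericValue();
        int next = i + 1 < input.length()
            ? RomanNumeral.valueOf(String.valueOf(input.charAt(i + 1))).getNumericValue()
            : 0;
        return cur < next ? -cur : cur;
    })
    .sum();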
By the nature of a stream, you don't know the next element unless you read it, so directly obtaining the next element while processing the current one is not possible. However, since you are reading the current element, you obviously know what was read before it. So, to achieve "accessing the previous element" and "accessing the next element", you can rely on the history of the elements that were already processed.
The following two solutions are possible for your problem:
Get access to previously read elements. This way you know the current element and a defined number of previously read elements.
Assume that at the moment of stream processing you are reading the next element, and that the current element was read in the previous iteration. In other words, you treat the previously read element as "current" and the element being processed now as "next" (see below).
Solution 1 - implementation
First we need a data structure to keep track of data flowing through the stream. A good choice is an instance of Queue, because queues by their nature allow data to flow through them. We only need to bound the queue to the number of last elements we want to remember (that would be 3 elements for your use case). For this we create a "bounded" queue keeping history, like this:
public class StreamHistory<T> {
private final int numberOfElementsToRemember;
private LinkedList<T> queue = new LinkedList<T>(); // queue will store at most numberOfElementsToRemember
public StreamHistory(int numberOfElementsToRemember) {
this.numberOfElementsToRemember = numberOfElementsToRemember;
}
public StreamHistory save(T curElem) {
if (queue.size() == numberOfElementsToRemember) {
queue.pollLast(); // remove last to keep only requested number of elements
}
queue.offerFirst(curElem);
return this;
}
public LinkedList<T> getLastElements() {
return queue; // or return immutable copy or immutable view on the queue. Depends on what you want.
}
}
The generic parameter T is the type of the actual stream elements. The save method returns a reference to the current StreamHistory instance for better integration with the java Stream api (see below); this is not strictly required.
Now the only thing left to do is to convert the stream of elements into a stream of StreamHistory instances (where each successive element of the stream holds the last n actual objects that went through the stream).
public class StreamHistoryTest {
public static void main(String[] args) {
Stream<Character> charactersStream = IntStream.range(97, 123).mapToObj(code -> (char) code); // original stream
StreamHistory<Character> streamHistory = new StreamHistory<>(3); // instance of StreamHistory which will store last 3 elements
charactersStream.map(character -> streamHistory.save(character)).forEach(history -> {
history.getLastElements().forEach(System.out::print);
System.out.println();
});
}
}
In the above example we first create a stream of all letters of the alphabet. Then we create an instance of StreamHistory which is passed into each iteration of the map() call on the original stream. Via the call to map() we convert to a stream containing references to our StreamHistory instance.
Note that each time data flows through the original stream, the call to streamHistory.save(character) updates the content of the streamHistory object to reflect the current state of the stream.
Finally, in each iteration we print the last 3 saved characters. The output of this method is the following:
a
ba
cba
dcb
edc
fed
gfe
hgf
ihg
jih
kji
lkj
mlk
nml
onm
pon
qpo
rqp
srq
tsr
uts
vut
wvu
xwv
yxw
zyx
Solution 2 - implementation
While solution 1 will do the job in most cases and is fairly easy to follow, there are use cases where the possibility to inspect the next and the previous element directly is really convenient. In such a scenario we are only interested in three-element tuples (previous, current, next), and a single element on its own does not matter (for a simple example, consider the following riddle: "given a stream of numbers, return the tuple of three subsequent numbers which gives the highest sum"). To solve such use cases we might want a more convenient API than the StreamHistory class.
For this scenario we introduce a new variation of the StreamHistory class (which we call StreamNeighbours). The class allows inspecting the previous and the next element directly. Processing is done at time "T-1", that is: the element currently going through the stream is considered the next element, and the element processed one step earlier is considered the current element. This way we, in some sense, inspect one element ahead.
The modified class is the following:
public class StreamNeighbours<T> {
private LinkedList<T> queue = new LinkedList<>(); // queue stores the current element plus one before and one after
private boolean threeElementsRead; // at least three items were added - only if we have three items we can inspect "next" and "previous" element
/**
* Allows to handle situation when only one element was read, so technically this instance of StreamNeighbours is not
* yet ready to return next element
*/
public boolean isFirst() {
return queue.size() == 1;
}
/**
* Allows to read the first element in case fewer than three elements were read, so technically this instance of StreamNeighbours is
* not yet ready to return both the next and the previous element.
* @return the first element read
*/
public T getFirst() {
if (isFirst()) {
return queue.getFirst();
} else if (isSecond()) {
return queue.get(1);
} else {
throw new IllegalStateException("Call to getFirst() only possible when one or two elements were added. Call to getCurrent() instead. To inspect the number of elements call to isFirst() or isSecond().");
}
}
/**
* Allows to handle the situation when only two elements were read, so technically this instance of StreamNeighbours is not
* yet ready to return the next element (because we always need 3 elements to have the previous and next element)
*/
public boolean isSecond() {
return queue.size() == 2;
}
public T getSecond() {
if (!isSecond()) {
throw new IllegalStateException("Call to getSecond() only possible when one two elements were added. Call to getFirst() or getCurrent() instead.");
}
return queue.getFirst();
}
/**
* Allows to check whether this instance of StreamNeighbours is ready to return both the next and the previous element.
* @return true if at least three elements were read
*/
public boolean areThreeElementsRead() {
return threeElementsRead;
}
public StreamNeighbours<T> addNext(T nextElem) {
if (queue.size() == 3) {
queue.pollLast(); // remove last to keep only three
}
queue.offerFirst(nextElem);
if (!areThreeElementsRead() && queue.size() == 3) {
threeElementsRead = true;
}
return this;
}
public T getCurrent() {
ensureReadyForReading();
return queue.get(1); // current element is always in the middle when three elements were read
}
public T getPrevious() {
if (!isFirst()) {
return queue.getLast();
} else {
throw new IllegalStateException("Unable to read previous element of first element. Call to isFirst() to know if it first element or not.");
}
}
public T getNext() {
ensureReadyForReading();
return queue.getFirst();
}
private void ensureReadyForReading() {
if (!areThreeElementsRead()) {
throw new IllegalStateException("Queue is not threeElementsRead for reading (less than two elements were added). Call to areThreeElementsRead() to know if it's ok to call to getCurrent()");
}
}
}
Now, assuming that three elements were already read, we can directly access current element (which is the element going through the stream at time T-1), we can access next element (which is the element going at the moment through the stream) and previous (which is the element going through the stream at time T-2):
public class StreamTest {
public static void main(String[] args) {
Stream<Character> charactersStream = IntStream.range(97, 123).mapToObj(code -> (char) code);
StreamNeighbours<Character> streamNeighbours = new StreamNeighbours<Character>();
charactersStream.map(character -> streamNeighbours.addNext(character)).forEach(neighbours -> {
// NOTE: if you want to access the values before the instance of StreamNeighbours is ready to serve three elements,
// you can use the methods isFirst() -> getFirst(), isSecond() -> getSecond():
//
// if (neighbours.isFirst()) {
//     Character currentChar = neighbours.getFirst();
//     System.out.println("???" + " " + currentChar + " " + "???");
// } else if (neighbours.isSecond()) {
//     Character currentChar = neighbours.getSecond();
//     System.out.println(String.valueOf(neighbours.getFirst()) + " " + currentChar + " " + "???");
// }
//
// OTHERWISE: you are only interested in tuples consisting of three elements, so three elements need to be read
if (neighbours.areThreeElementsRead()) {
System.out.println(neighbours.getPrevious() + " " + neighbours.getCurrent() + " " + neighbours.getNext());
}
});
}
}
The output of this is the following:
a b c
b c d
c d e
d e f
e f g
f g h
g h i
h i j
i j k
j k l
k l m
l m n
m n o
n o p
o p q
p q r
q r s
r s t
s t u
t u v
u v w
v w x
w x y
x y z
With the StreamNeighbours class it is easier to track the previous/next element (because we have methods with appropriate names), while with the StreamHistory class this is more cumbersome, since we would need to manually "reverse" the order of the queue to achieve it.
As others have stated, it's not feasible to get the next element from within an iterated Stream.
If an IntStream is used as a for loop surrogate that merely provides the iteration index, it is possible to use its range index just like with for; one only needs a means of skipping the next element on the next iteration, e.g. an external skip flag, like this:
AtomicBoolean skip = new AtomicBoolean();
List<String> patterns = IntStream.range(0, ptrnStr.length())
.mapToObj(i -> {
if (skip.get()) {
skip.set(false);
return "";
}
char c = ptrnStr.charAt(i);
if (c == '\\') {
skip.set(true);
return String.valueOf(new char[] { c, ptrnStr.charAt(++i) });
}
return String.valueOf(c);
})
.collect(Collectors.toList());
It's not pretty, but it works.
On the other hand, with for, it can be as simple as:
List<String> patterns = new ArrayList<>();
for (int i = 0; i < ptrnStr.length(); i++) {
    char c = ptrnStr.charAt(i);
    patterns.add(
        c != '\\'
            ? String.valueOf(c)
            : String.valueOf(new char[] { c, ptrnStr.charAt(++i) })
    );
}
EDIT:
Condensed the code and added the for example.
I have the following code snippet (the code is in Java, but I have tried to reduce clutter as much as possible):
class State {
    public synchronized void read() {
    }
    public synchronized void write(ResourceManager rm) {
        rm.request();
    }
    public synchronized void returnResource() {
    }
}
State st1 = new State();
State st2 = new State();
State st3 = new State();
class ResourceManager {
    public synchronized void request() {
        st2 = findIdleState();
        st2.returnResource();
    }
}
ResourceManager globalRM = new ResourceManager();
Thread1()
{
st1.write(globalRM);
}
Thread2()
{
st2.write(globalRM);
}
Thread3()
{
st1.read();
}
This code snippet has the possibility of entering a deadlock with the following sequence of calls:
Thread1: st1.write()
Thread1: st1.write() invokes globalRM.request()
Thread2: st2.write()
Thread1: globalRM.request() tries to invoke st2.returnResource(), but gets blocked because Thread2 is holding a lock on st2.
Thread2: st2.write() tries to invoke globalRM.request(), but gets blocked because globalRM's lock is with Thread1
Thread3: st1.read() gets blocked, because Thread1 is holding the lock on st1.
How do I solve such a deadlock? I thought about it for a while, looking for some sort of ordered-locks approach for acquiring the locks, but I cannot think of such a solution. The problem is that the resource manager is global, while the states are specific to each job (each job has a sequential ID, which could be used for ordering if there were some way to use an order for lock acquisition).
There are some options to avoid this scenario, each with its advantages and drawbacks:
1.) Use a single lock object for all instances. This approach is simple to implement, but limits you to one thread holding the lock at a time. This can be reasonable if the synchronized blocks are short and scalability is not a big issue (e.g. a desktop application, i.e. non-server). The main selling point is the simplicity of the implementation.
2.) Use ordered locking - this means that whenever you have to acquire two or more locks, you ensure that the order in which they are acquired is always the same (see the sketch after this list). That's much easier said than done and can require heavy changes to the code base.
3.) Get rid of the locks completely. With the java.util.concurrent(.atomic) classes you can implement multithreaded data structures without blocking (usually using compareAndSet-flavor methods). This certainly requires changes to the code base and some rethinking of the structures; it usually requires a rewrite of critical portions of the code base.
4.) Many problems simply disappear when you consistently use immutable types and objects. This combines well with the atomic (3.) approach for implementing mutable super-structures (often realized as copy-on-change).
To give any recommendation one would need to know a lot more details about what is protected by your locks.
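For option 2.), the sequential job IDs mentioned in the question are exactly what ordered locking needs. A hedged sketch (getJobId() is a hypothetical accessor for that ID; the global ResourceManager lock also needs a fixed place in the order, e.g. always acquired before any State lock):
// Always lock the State with the smaller job ID first. Since every thread
// acquires State locks in the same global order, the circular wait that
// causes the deadlock cannot form.
void lockBoth(State a, State b, Runnable critical) {
    State first = a.getJobId() < b.getJobId() ? a : b;
    State second = a.getJobId() < b.getJobId() ? b : a;
    synchronized (first) {
        synchronized (second) {
            critical.run();
        }
    }
}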
--- EDIT ---
I needed a lock-free Set implementation; this code sample illustrates its strengths and weaknesses. I implemented iterator() as a snapshot; implementing it to throw ConcurrentModificationException and to support remove() would be a little more complicated, and I had no need for it. Some of the referenced utility classes I did not post (I think it is completely obvious what the missing referenced pieces do).
I hope it's at least a little useful as a starting point for working with AtomicReferences.
/**
* Helper class that implements a set-like data structure
* with atomic add/remove capability.
*
* Iteration occurs always on a current snapshot, thus
* the iterator will not support remove, but also never
* throw ConcurrentModificationException.
*
* Iteration and reading the set is cheap, altering the set
* is expensive.
*/
public final class AtomicArraySet<T> extends AbstractSet<T> {
protected final AtomicReference<Object[]> reference =
new AtomicReference<Object[]>(Primitives.EMPTY_OBJECT_ARRAY);
public AtomicArraySet() {
}
/**
* Checks if the set contains the element.
*/
@Override
public boolean contains(final Object object) {
final Object[] array = reference.get();
for (final Object element : array) {
if (element.equals(object))
return true;
}
return false;
}
/**
* Adds an element to the set. Returns true if the element was added.
*
* If element is NULL or already in the set, no change is made to the
* set and false is returned.
*/
@Override
public boolean add(final T element) {
if (element == null)
return false;
while (true) {
final Object[] expect = reference.get();
final int length = expect.length;
// determine if element is already in set
for (int i=length-1; i>=0; --i) {
if (expect[i].equals(element))
return false;
}
final Object[] update = new Object[length + 1];
System.arraycopy(expect, 0, update, 0, length);
update[length] = element;
if (reference.compareAndSet(expect, update))
return true;
}
}
/**
* Adds all the given elements to the set.
* Semantically this is the same a calling add() repeatedly,
* but the whole operation is made atomic.
*/
@Override
public boolean addAll(final Collection<? extends T> collection) {
if (collection == null || collection.isEmpty())
return false;
while (true) {
boolean modified = false;
final Object[] expect = reference.get();
int length = expect.length;
Object[] temp = new Object[collection.size() + length];
System.arraycopy(expect, 0, temp, 0, length);
ELoop: for (final Object element : collection) {
if (element == null)
continue;
for (int i=0; i<length; ++i) {
if (element.equals(temp[i])) {
modified |= temp[i] != element;
temp[i] = element;
continue ELoop;
}
}
temp[length++] = element;
modified = true;
}
// check if content did not change
if (!modified)
return false;
final Object[] update;
if (temp.length == length) {
update = temp;
} else {
update = new Object[length];
System.arraycopy(temp, 0, update, 0, length);
}
if (reference.compareAndSet(expect, update))
return true;
}
}
/**
* Removes an element from the set. Returns true if the element was removed.
*
* If element is NULL or not in the set, no change is made to the set and
* false is returned.
*/
@Override
public boolean remove(final Object element) {
if (element == null)
return false;
while (true) {
final Object[] expect = reference.get();
final int length = expect.length;
int i = length;
while (--i >= 0) {
if (expect[i].equals(element))
break;
}
if (i < 0)
return false;
final Object[] update;
if (length == 1) {
update = Primitives.EMPTY_OBJECT_ARRAY;
} else {
update = new Object[length - 1];
System.arraycopy(expect, 0, update, 0, i);
System.arraycopy(expect, i+1, update, i, length - i - 1);
}
if (reference.compareAndSet(expect, update))
return true;
}
}
/**
* Removes all entries from the set.
*/
@Override
public void clear() {
reference.set(Primitives.EMPTY_OBJECT_ARRAY);
}
/**
* Gets an estimate of how many elements are in the set
* (it's an estimate, as it only returns the current size,
* and that may change at any time).
*/
@Override
public int size() {
return reference.get().length;
}
@Override
public boolean isEmpty() {
return reference.get().length <= 0;
}
@SuppressWarnings("unchecked")
@Override
public Iterator<T> iterator() {
final Object[] array = reference.get();
return (Iterator<T>) ArrayIterator.get(array);
}
@Override
public Object[] toArray() {
final Object[] array = reference.get();
return Primitives.cloneArray(array);
}
@SuppressWarnings("unchecked")
@Override
public <U extends Object> U[] toArray(final U[] array) {
final Object[] content = reference.get();
final int length = content.length;
if (array.length < length) {
// Make a new array of a's runtime type, but my contents:
return (U[]) Arrays.copyOf(content, length, array.getClass());
}
System.arraycopy(content, 0, array, 0, length);
if (array.length > length)
array[length] = null;
return array;
}
}
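A hypothetical usage sketch (assuming the missing Primitives/ArrayIterator helpers are on the classpath, as noted above): two threads adding the same elements concurrently, with duplicates rejected atomically:
public static void main(String[] args) throws InterruptedException {
    AtomicArraySet<String> set = new AtomicArraySet<>();
    Runnable writer = () -> {
        for (int i = 0; i < 1000; i++) {
            set.add("item-" + i); // duplicates from the other thread are rejected
        }
    };
    Thread t1 = new Thread(writer);
    Thread t2 = new Thread(writer);
    t1.start(); t2.start();
    t1.join(); t2.join();
    System.out.println(set.size()); // 1000: each add() retries its CAS until it wins
}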
The answer to any deadlock is to acquire the same locks in the same order. You'll just have to figure out a way to do that.
I have an interesting problem I would like some help with. I have implemented two queues for two separate conditions, one based on FIFO and the other on the natural order of a key (a ConcurrentMap). That is, you can imagine both queues holding the same data, just ordered differently. The question I have (and I am looking for an efficient way of doing this): if I find the key in the ConcurrentMap based on some criteria, what is the best way to find the "position" of that key in the FIFO map? Essentially I would like to know whether it is the first key (which is easy), or, say, the 10th key.
Any help would be greatly appreciated.
There is no API for accessing the order in a FIFO map. The only way you can do it is to iterate over keySet(), values() or entrySet() and count.
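A minimal sketch of that counting approach, assuming an insertion-ordered map such as a LinkedHashMap:
// O(n) per lookup - there is no cheaper way through the Map API alone.
static <K> int positionOf(Map<K, ?> fifoMap, K key) {
    int pos = 0;
    for (K k : fifoMap.keySet()) {
        if (k.equals(key)) {
            return pos;
        }
        pos++;
    }
    return -1; // key not present
}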
I believe something like the code below will do the job. I've left the implementation of element --> key as an abstract method. Note the counter being used to assign increasing numbers to elements. Also note that if add(...) is called by multiple threads, the elements in the FIFO are only loosely ordered; that forces the fancy max(...) and min(...) logic, and it's also why the position is approximate. First and last are special cases: first can be indicated clearly, while last is tricky because the current implementation returns a real index.
Since this is an approximate location, I would suggest you consider making the API return a float between 0.0 and 1.0 to indicate relative position in the queue.
If your code needs to support removal using some means other than pop(...), you will need to use approximate size, and change the return to ((id - min) / (max - min)) * size, with all the appropriate int / float casting & rounding.
public abstract class ApproximateLocation<K extends Comparable<K>, T> {
protected abstract K orderingKey(T element);
private final ConcurrentMap<K, Wrapper<T>> _map = new ConcurrentSkipListMap<K, Wrapper<T>>();
private final Deque<Wrapper<T>> _fifo = new LinkedBlockingDeque<Wrapper<T>>();
private final AtomicInteger _counter = new AtomicInteger();
public void add(T element) {
K key = orderingKey(element);
Wrapper<T> wrapper = new Wrapper<T>(_counter.getAndIncrement(), element);
_fifo.add(wrapper);
_map.put(key, wrapper);
}
public T pop() {
Wrapper<T> wrapper = _fifo.pop();
_map.remove(orderingKey(wrapper.value));
return wrapper.value;
}
public int approximateLocation(T element) {
Wrapper<T> wrapper = _map.get(orderingKey(element));
Wrapper<T> first = _fifo.peekFirst();
Wrapper<T> last = _fifo.peekLast();
if (wrapper == null || first == null || last == null) {
// element is not in composite structure; fifo has not been written to yet because of concurrency
return -1;
}
int min = Math.min(wrapper.id, Math.min(first.id, last.id));
int max = Math.max(wrapper.id, Math.max(first.id, last.id));
if (wrapper == first || max == min) {
return 0;
}
if (wrapper == last) {
return max - min;
}
return wrapper.id - min;
}
private static class Wrapper<T> {
final int id;
final T value;
Wrapper(int id, T value) {
this.id = id;
this.value = value;
}
}
}
If you can use a ConcurrentNavigableMap, the size of the headMap gives you exactly what you want.
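For example (note that headMap(key) is a view, and size() on a ConcurrentSkipListMap view traverses its entries, so this is still O(n) per lookup):
// The head map contains exactly the keys ordered before the given key,
// so its size is that key's zero-based position in key order.
ConcurrentNavigableMap<Integer, String> map = new ConcurrentSkipListMap<>();
map.put(10, "a");
map.put(20, "b");
map.put(30, "c");
int position = map.headMap(20).size(); // 1 -> 20 is the second key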