What is the best-performing way in Java (7, 8) to remove the integer elements of one ArrayList from another? All the elements are unique in both the first and the second list.
At the moment I know the API method removeAll and use it this way:
tempList.removeAll(tempList2);
The problem appears when I operate with ArrayLists that have more than 10,000 elements. For example, when I remove 65,000 elements, the delay is about 2 seconds. But I need to operate with even larger lists of more than 1,000,000 elements.
What is the strategy for this issue?
Maybe something with the new Stream API could solve it?
tl;dr:
Keep it simple. Use
list.removeAll(new HashSet<T>(listOfElementsToRemove));
instead.
As Eran already mentioned in his answer: The low performance stems from the fact that the pseudocode of a generic removeAll implementation is
public boolean removeAll(Collection<?> c) {
    for (each element e of this) {
        if (c.contains(e)) {
            this.remove(e);
        }
    }
}
So the contains call that is done on the list of elements to remove will cause the O(n*k) performance (where n is the number of elements to remove, and k is the number of elements in the list that the method is called on).
Naively, one could imagine that the this.remove(e) call on a List might also have O(k), and this implementation would also have quadratic complexity. But this is not the case: You mentioned that the lists are specifically ArrayList instances. And the ArrayList#removeAll method is implemented to delegate to a method called batchRemove that directly operates on the underlying array, and does not remove the elements individually.
So all you have to do is to make sure that the lookup in the collection that contains the elements to remove is fast - preferably O(1). This can be achieved by putting these elements into a Set. In the end, it can just be written as
list.removeAll(new HashSet<T>(listOfElementsToRemove));
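For illustration, a minimal self-contained sketch of this (the list sizes and contents are made up for the example):
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;

public class RemoveAllDemo {
    public static void main(String[] args) {
        List<Integer> tempList = new ArrayList<>();
        List<Integer> tempList2 = new ArrayList<>();
        for (int i = 0; i < 1_000_000; i++) {
            tempList.add(i);          // the big list
        }
        for (int i = 0; i < 65_000; i++) {
            tempList2.add(i * 3);     // the elements to remove
        }
        // Wrapping the removal candidates in a HashSet makes each
        // contains() lookup O(1), so removeAll becomes roughly linear
        tempList.removeAll(new HashSet<>(tempList2));
        System.out.println("Remaining: " + tempList.size());
    }
}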
Side notes:
The answer by Eran has, IMHO, two major drawbacks: First of all, it requires sorting the lists, which is O(n log n) and simply not necessary. But more importantly (and obviously): sorting will likely change the order of the elements! What if this is simply not desired?
Remotely related: there are some other subtleties involved in the removeAll implementations. For example, the HashSet removeAll method is surprisingly slow in some cases. Although this also boils down to O(n*n) when the elements to be removed are stored in a list, the exact behavior may indeed be surprising in this particular case.
Well, since removeAll checks for each element of tempList whether it appears in tempList2, the running time is proportional to the size of the first list multiplied by the size of the second list, which means O(N^2) unless one of the two lists is very small and can be considered as "constant size".
If, on the other hand, you pre-sort the lists, and then iterate over both lists with a single iteration (similar to the merge step in merge sort), the sorting will take O(NlogN) and the iteration O(N), giving you a total running time of O(NlogN). Here N is the size of the larger of the two lists.
If you can replace the lists by a sorted structure (perhaps a TreeSet, since you said the elements are unique), you can implement removeAll in linear time, since you won't have to do any sorting.
I haven't tested it, but something like this could work (assuming both tempList and tempList2 are sorted):
Iterator<Integer> iter1 = tempList.iterator();
Iterator<Integer> iter2 = tempList2.iterator();
Integer current = null;
Integer current2 = null;
boolean advance = true;
while (iter1.hasNext() && iter2.hasNext()) {
    if (advance) {
        current = iter1.next();
        advance = false;
    }
    if (current2 == null || current > current2) {
        current2 = iter2.next();
    }
    if (current <= current2) {
        advance = true;
        // compare values, not references: == on boxed Integers is unreliable
        if (current.equals(current2))
            iter1.remove();
    }
}
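If the inputs are not sorted to begin with, they would need to be sorted first, which is only acceptable if changing the element order of tempList is fine (names as in the sketch above):
Collections.sort(tempList);   // O(n log n)
Collections.sort(tempList2);  // O(n log n)
// then run the two-iterator loop above: one O(n) pass that
// removes the common elements from tempList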
I suspect removing from an ArrayList is a performance hit, since the underlying array has to be compacted (all subsequent elements shifted) each time an element is removed. It may be faster to do this:
Create a Set of the elements to be removed.
Create a new result ArrayList, call it R; you can give it enough capacity at construction.
Iterate through the original list; if an element is found in the Set, don't add it to R, otherwise add it.
This should be O(N), assuming that building the Set and each lookup in it are constant time. A sketch follows below.
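A minimal sketch of that idea (the method name and element type are made up for illustration; it uses java.util.HashSet for the constant-time lookups):
public static List<Integer> subtract(List<Integer> source, List<Integer> toRemove) {
    Set<Integer> removeSet = new HashSet<>(toRemove);        // the 'Set' of elements to remove
    List<Integer> result = new ArrayList<>(source.size());   // R, pre-sized at construction
    for (Integer element : source) {
        if (!removeSet.contains(element)) {
            result.add(element);                             // keep only elements not in the Set
        }
    }
    return result;
}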
Related
I want to retrieve and remove the first element from a List. As I see it, I have two options:
First Approach:
LinkedList<String> servers = new LinkedList<String>();
....
String firstServerName = servers.removeFirst();
Second Approach:
ArrayList<String> servers = new ArrayList<String>();
....
String firstServerName = servers.remove(0);
I have a lot of elements in my list.
Is there any preference as to which one we should use?
And what is the difference between the above two? Are they technically the same thing in terms of performance? What is the complexity involved here if we have a lot of elements?
What is the most efficient way to do this?
If the comparison for "remove first" is between the ArrayList and the LinkedList classes, the LinkedList wins clearly.
Removing the first element from a linked list costs O(1), while doing so for an array-backed list (ArrayList) costs O(n), because the remaining elements have to be shifted.
You should use LinkedList.
Background:
In practical terms, LinkedList#removeFirst is more efficient since it operates on a doubly-linked list, and removing the first element basically consists of unlinking it from the head of the list and updating the next element to be the first one (complexity O(1)):
private E unlinkFirst(Node<E> f) {
    // assert f == first && f != null;
    final E element = f.item;
    final Node<E> next = f.next;
    f.item = null;
    f.next = null; // help GC
    first = next;
    if (next == null)
        last = null;
    else
        next.prev = null;
    size--;
    modCount++;
    return element;
}
ArrayList#remove operates on an internal array, which requires moving all subsequent elements one position to the left by copying the subarray (complexity O(n)):
public E remove(int index) {
    rangeCheck(index);
    modCount++;
    E oldValue = elementData(index);
    int numMoved = size - index - 1;
    if (numMoved > 0)
        System.arraycopy(elementData, index+1, elementData, index,
                         numMoved);
    elementData[--size] = null; // clear to let GC do its work
    return oldValue;
}
Additional answer:
On the other hand, the LinkedList#get operation requires traversing up to half of the list to retrieve an element at a specified index (worst case). ArrayList#get directly accesses the element at the specified index, since it operates on an array.
My rules of thumb for efficiency here would be:
Use LinkedList if you do a lot of add/remove operations compared with access operations (e.g. get);
Use ArrayList if you do a lot of access operations compared with add/remove.
Make sure you understand the difference between LinkedList and ArrayList. ArrayList is implemented using an array.
LinkedList takes constant time to remove the first element.
ArrayList takes linear time to remove the first element, since the remaining elements have to be shifted one position to the left.
Also, I think LinkedList can be more efficient in terms of space here: because ArrayList does not (and should not) resize its array every time an element is removed, it can take up more space than needed.
I think that what you need is an ArrayDeque (an unfairly overlooked class in java.util). Its removeFirst method performs in O(1) as for LinkedList, while it generally shows the better space and time characteristics of ArrayList. It’s implemented as a circular queue in an array.
You should very rarely use LinkedList. I did once in my 17 years as a Java programmer and regretted it in retrospect.
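A sketch of the "remove first" operation with ArrayDeque (the server names are made up):
ArrayDeque<String> servers = new ArrayDeque<String>();
servers.add("server-a");   // hypothetical entries
servers.add("server-b");
// O(1): the deque just advances its internal head index
String firstServerName = servers.removeFirst();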
List.subList(int fromIndex, int toIndex)
Returns a view of the portion of this list between the specified fromIndex, inclusive, and toIndex, exclusive.
Good to use for ArrayList where removing the first element has complexity O(n).
final String firstServerName = servers.get(0);
servers = servers.subList(1, servers.size());
Removing the first element of an ArrayList is O(n). For a LinkedList it is O(1), so I'd go with that.
Check the ArrayList documentation:
The size, isEmpty, get, set, iterator, and listIterator operations run in constant time. The add operation runs in amortized constant time, that is, adding n elements requires O(n) time. All of the other operations run in linear time (roughly speaking). The constant factor is low compared to that for the LinkedList implementation.
Another answer here actually links to the OpenJDK source.
Using a linked list is by far faster.
LinkedList
It just relinks the nodes, so the first one disappears.
ArrayList
With an ArrayList, it has to move every remaining element back one spot to keep the underlying array contiguous.
As others have rightly pointed out, LinkedList is faster than ArrayList for removal of the first element from anything other than a very short list.
However, to make your choice between them you need to consider the complete mix of operations. For example, if your workload does millions of indexed accesses to a hundred element list for each first element removal, ArrayList will be better overall.
Third Approach:
It is exposed by the java.util.Queue interface. LinkedList is an implementation of this interface.
The Queue interface exposes the E poll() method, which effectively removes the head of the list (queue).
In terms of performance, the poll() method is comparable to removeFirst(); under the hood it performs the same unlinking of the first node as removeFirst().
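A small sketch of this third approach (the values are made up):
Queue<String> servers = new LinkedList<String>();
servers.add("server-a");   // hypothetical entries
servers.add("server-b");
// poll() removes and returns the head; unlike removeFirst(),
// it returns null instead of throwing if the queue is empty
String firstServerName = servers.poll();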
Problem
I'm writing a simple Java program in which I have a TreeSet that contains Comparable elements (a class that I've written myself). At a certain point I need to take only the first k elements from it.
What I've done
Currently I've found two different solutions to my problem:
Use a simple method written by me that copies the first k elements from the initial TreeSet;
Use Google Guava's greatestOf method.
For the second option you need to call the method in this way:
Ordering.natural().greatestOf(mySet, 80)
But I think that it's useless to use this kind of invocation because the elements are already sorted. Am I wrong?
Question
I want to ask here: what is a correct and, at the same time, efficient way to obtain a Collection-derived class that contains the first k elements of a TreeSet?
Additional information
Java version: >= 7
You could use Guava's Iterables#limit:
ImmutableList.copyOf(Iterables.limit(yourSet, 7))
http://docs.guava-libraries.googlecode.com/git/javadoc/com/google/common/collect/Iterables.html#limit(java.lang.Iterable, int)
I would suggest using a TreeSet<YourComparableClass> collection; it seems to be the solution you are looking for.
A TreeSet can give you an iterator, and you can simply iterate K times, storing the objects the iterator returns: the elements are returned in order.
Moreover, a TreeSet keeps your elements sorted at all times: whenever you add or remove elements, the structure remains ordered.
Here is a possible example:
public static ArrayList<YourComparableClass> getFirstK(TreeSet<YourComparableClass> set, int k) {
    Iterator<YourComparableClass> iterator = set.iterator();
    ArrayList<YourComparableClass> result = new ArrayList<>(k); // to store the first K items
    // the iterator returns the items in ascending order;
    // also check iterator.hasNext() if k might exceed set.size()
    for (int i = 0; i < k; i++) {
        result.add(iterator.next());
    }
    return result;
}
The descendingIterator() method of java.util.TreeSet yields elements from greatest to least, so you can just step it k times, inserting the elements into a collection. The running time is O(log n + k), where k is the number of elements returned, which is surely fast enough.
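A sketch of that approach (the method name is made up; it collects the k greatest elements, largest first):
public static <T> List<T> greatestK(TreeSet<T> set, int k) {
    List<T> result = new ArrayList<>(k);
    Iterator<T> it = set.descendingIterator();   // greatest to least
    for (int i = 0; i < k && it.hasNext(); i++) {
        result.add(it.next());
    }
    return result;
}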
If you're using a HashSet, on the other hand, then the elements in fact are not sorted, so you need to use the linear-time selection method that you indicated.
Well, I know it is a very novice question, but nothing comes to mind. Currently I am trying this, but it is a very inefficient way for such a big number. Can anyone help?
int count = 66000000;
LinkedList<Integer> list = new LinkedList<Integer>();
for (int i = 1; i <= count; i++) {
    list.add(i);
    //System.out.println(i);
}
EDIT:
Actually I have to perform operations on the whole list (queue) repeatedly (say, remove some elements on a condition and add them again), so having to iterate over the whole list became very slow; with such a number it took more than 10 minutes.
The size of your output is O(n), therefore it's literally impossible for an algorithm that populates your list to be more efficient than O(n) time complexity.
You're spending a whole lot more time just printing your numbers to the screen than you actually are spending generating the list. If you really want to speed this code up, remove the
System.out.println(i);
On a separate note, I've noticed that you're using a LinkedList. If you used an array (or an array-based list) it should be faster.
You could implement a List where the get(int index) method simply returns the index (or some value based on the index). The creation of the list would then be constant time (O(1)). The list would have to be immutable.
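A sketch of such a constant-time "virtual" list (extending java.util.AbstractList), assuming the value at index i should simply be i + 1 to mirror the 1..count loop in the question:
// Immutable by default: AbstractList's mutator methods throw UnsupportedOperationException.
class RangeList extends AbstractList<Integer> {
    private final int size;

    RangeList(int size) {
        this.size = size;
    }

    @Override
    public Integer get(int index) {
        if (index < 0 || index >= size) {
            throw new IndexOutOfBoundsException("Index: " + index);
        }
        return index + 1;   // the value is computed from the index, nothing is stored
    }

    @Override
    public int size() {
        return size;
    }
}
// List<Integer> list = new RangeList(66000000);   // creation is O(1)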
Your question isn't just about building the list, it includes deletion and re-insertion. I suspect you should be using a HashSet, maybe even a BitSet instead of a List of any kind.
I want to subtract two ArrayLists so that I get the elements of one list that are not in the other list.
I do it this way:
removeIDs=(ArrayList<Integer>) storedIDs.clone();
removeIDs.removeAll(downloadedIDs);
downloadIDs=(ArrayList<Integer>) downloadedIDs.clone();
downloadIDs.removeAll(storedIDs);
The problem is that both lists contain 5000 elements and it takes almost 4 seconds on my Android phone.
Is there a fast way to do this?
Is using sets faster? (I don't have duplicate values in the lists.)
I am developing an Android app.
Use HashSet instead of ArrayList unless you need to keep the order.
Removing an element requires a scan of the full list for List implementations; for a HashSet, by comparison, it is just the calculation of a hash code and the identification of the target bucket.
Sets should be much faster. Right now it's basically doing an n^2 loop: it loops over every element in removeIDs and checks whether that ID is in downloadedIDs, which requires searching the whole list. If downloadedIDs were stored in something faster to search, like a HashSet, this would be much faster and become O(n) instead of O(n^2). There might also be something faster in the Collections API, but I don't know it.
If you need to preserve ordering, you can use a LinkedHashSet instead of a normal HashSet, but it will add some memory overhead and a bit of a performance hit for inserting/removing elements.
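A possible sketch of the set-based subtraction that keeps the original list order (variable names follow the question):
Set<Integer> stored = new HashSet<>(storedIDs);
Set<Integer> downloaded = new HashSet<>(downloadedIDs);

// IDs that are stored but were not downloaded
List<Integer> removeIDs = new ArrayList<>();
for (Integer id : storedIDs) {
    if (!downloaded.contains(id)) {
        removeIDs.add(id);
    }
}

// IDs that were downloaded but are not stored
List<Integer> downloadIDs = new ArrayList<>();
for (Integer id : downloadedIDs) {
    if (!stored.contains(id)) {
        downloadIDs.add(id);
    }
}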
I agree with the HashSet recommendation unless the Integer IDs fit in a relatively small range. In that case, I would benchmark using each of HashSet and BitSet, and actually use whichever is faster for your data in your environment.
First of all, my apologies for the long answer. If I am wrong at any point, you are always welcome to correct me. Here I am comparing some options for solving the problem.
OPTION 1 < ArrayList >:
In your code you used the ArrayList.removeAll method, so let's look into the code of removeAll. Here is the source code of removeAll:
public boolean removeAll(Collection<?> c) {
    return batchRemove(c, false);
}
So we need to know what is in the batchRemove method; here is the link. The key part, as you can see, is:
for (; r < size; r++)
    if (c.contains(elementData[r]) == complement)
        elementData[w++] = elementData[r];
Now let's look into the contains method, which is just a wrapper around the indexOf method (link). In the indexOf method there is an O(n) operation (noting just a part of it here):
for (int i = 0; i < size; i++)
    if (elementData[i]==null)
        return i;
So overall it is an O(n^2) operation in removeAll.
OPTION 2 < HashSet >:
Previously I wrote something here, but it seems I was wrong at some point, so I removed it. Better to take a suggestion from an expert about HashSet. I am not sure whether a HashSet will be a better solution in your case, so I am proposing another solution.
OPTION 3 < My suggestion, you can try >:
step 1: if your data is already sorted then you don't need this step; otherwise, sort the list you will subtract (the second list)
step 2: for every element of the unsorted list, run a binary search in the second list
step 3: if no match is found, store the element in a separate result list; if a match is found, don't add it
step 4: the result list is your final answer (a sketch follows after the cost estimate below)
Cost of option 3:
step 1: if not already sorted, O(n log n) time
step 2: O(n log n) time
step 3: O(n) space
So overall: O(n log n) time and O(n) space.
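A rough sketch of option 3 (the method name is made up; Collections.binarySearch returns a negative value when the element is not found):
public static List<Integer> subtractWithBinarySearch(List<Integer> source, List<Integer> toSubtract) {
    List<Integer> sorted = new ArrayList<>(toSubtract);
    Collections.sort(sorted);                               // step 1: sort once, O(n log n)
    List<Integer> result = new ArrayList<>(source.size());
    for (Integer element : source) {                        // step 2: n binary searches
        if (Collections.binarySearch(sorted, element) < 0) {
            result.add(element);                            // step 3: keep only unmatched elements
        }
    }
    return result;                                          // step 4: the final answer
}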
If a list is required, you can choose a LinkedList. In your case, as #Chris said, the ArrayList implementation will move all the elements in each removal.
With a LinkedList you would get much better performance for random adding/removing. See this post.
I had an interview today, and they gave me:
List A has:
f
google
gfk
fat
...
List B has:
hgt
google
koko
fat
ffta
...
They asked me to merge these two lists into one sorted List C.
What I said:
I added List B to List A, then created a Set from List A, then created a List from the Set. The interviewer told me the lists are big and this method will not be good for performance; he said it will be n log(n).
What would be a better approach to this problem?
Well your method would require O(3N) additional space (the concatenated List, the Set and the result List), which is its main inefficiency.
I would sort ListA and ListB with whatever sorting algorithm you choose (QuickSort is in-place, requiring O(1) space; I believe Java's default sort strategy is MergeSort, which typically requires O(N) additional space), then use a MergeSort-like merge step: compare the element at the "current" index of ListA with the one at the current index of ListB, insert the element that should come first into ListC, and increment that list's "current" index. Still N log N, but you avoid multiple rounds of converting from collection to collection; this strategy only uses O(N) additional space (for ListC; along the way you'll need N/2 space if you MergeSort the source lists).
IMO the lower bound for an algorithm to do what the interviewer wanted would be O(NlogN). While the best solution would have less additional space and be more efficient within that growth model, you simply can't sort two unsorted lists of strings in less than NlogN time.
EDIT: Java's not my forte (I'm a SeeSharper by trade), but the code would probably look something like:
Collections.sort(listA);
Collections.sort(listB);

ListIterator<String> aIter = listA.listIterator();
ListIterator<String> bIter = listB.listIterator();
List<String> listC = new ArrayList<String>();

while (aIter.hasNext() || bIter.hasNext()) {
    if (!bIter.hasNext()) {
        listC.add(aIter.next());
    } else if (!aIter.hasNext()) {
        listC.add(bIter.next());
    } else {
        // kinda smells from a C# background to mix the List and its Iterator,
        // but this avoids "backtracking" the Iterators when their value isn't selected.
        String a = listA.get(aIter.nextIndex());
        String b = listB.get(bIter.nextIndex());
        if (a.equals(b)) {
            // equal elements: take one from each list
            listC.add(aIter.next());
            listC.add(bIter.next());
        } else if (a.compareTo(b) < 0) {
            listC.add(aIter.next());
        } else {
            listC.add(bIter.next());
        }
    }
}
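For a quick sanity check, with the lists from the question (and java.util.Arrays for the setup), the merge above should give a sorted listC that keeps the duplicates from both lists:
List<String> listA = new ArrayList<>(Arrays.asList("f", "google", "gfk", "fat"));
List<String> listB = new ArrayList<>(Arrays.asList("hgt", "google", "koko", "fat", "ffta"));
// after running the merge above:
// listC = [f, fat, fat, ffta, gfk, google, google, hgt, koko]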