I want to subtract two ArrayLists so that I end up with the elements that are not in the other list.
I do it this way:
removeIDs = (ArrayList<Integer>) storedIDs.clone();
removeIDs.removeAll(downloadedIDs);
downloadIDs = (ArrayList<Integer>) downloadedIDs.clone();
downloadIDs.removeAll(storedIDs);
The problem is that both lists contain 5000 elements, and this takes almost 4 seconds on my Android phone.
Is there a faster way to do this?
Is using sets faster? (I don't have duplicate values in the lists.)
I am developing an Android app.
Use a HashSet instead of an ArrayList unless you need to keep the order.
Removing an element requires a scan of the full List for list implementations; for a HashSet, by comparison, it is just the calculation of a hash code and the identification of the target bucket.
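For illustration, here is a minimal sketch of the set-based difference (subtract is a hypothetical helper; call it once per direction):

import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Elements of a that are not in b, preserving a's order.
static List<Integer> subtract(List<Integer> a, List<Integer> b) {
    Set<Integer> lookup = new HashSet<>(b);  // O(1) average contains()
    List<Integer> result = new ArrayList<>();
    for (Integer id : a) {
        if (!lookup.contains(id)) {
            result.add(id);
        }
    }
    return result;
}

The two results from the question would then be subtract(storedIDs, downloadedIDs) and subtract(downloadedIDs, storedIDs).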
Sets should be much faster. Right now it's basically an O(n^2) loop: it loops over every element in removeIDs and checks whether that id is in downloadedIDs, which requires searching the whole list. If downloadedIDs were stored in something faster to search, like a HashSet, this would be much faster and become O(n) instead of O(n^2). There might also be something faster in the Collections API, but I don't know of it.
If you need to preserve ordering, you can use a LinkedHashSet instead of a normal HashSet, but it adds some memory overhead and a small performance hit for inserting/removing elements.
I agree with the HashSet recommendation unless the Integer IDs fit in a relatively small range. In that case, I would benchmark using each of HashSet and BitSet, and actually use whichever is faster for your data in your environment.
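For illustration, a BitSet variant might look like the sketch below (assuming the IDs are non-negative and bounded by a reasonably small maximum; subtractWithBitSet is a hypothetical name):

import java.util.ArrayList;
import java.util.BitSet;
import java.util.List;

// Elements of a that are not in b; the BitSet grows to the largest ID seen.
static List<Integer> subtractWithBitSet(List<Integer> a, List<Integer> b) {
    BitSet inB = new BitSet();
    for (int id : b) {
        inB.set(id);
    }
    List<Integer> result = new ArrayList<>();
    for (int id : a) {
        if (!inB.get(id)) {
            result.add(id);
        }
    }
    return result;
}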
First of all, my apologies for the long answer. If I am wrong at any point, you are always welcome to correct me. Here I compare some options for solving the problem.
OPTION 1 < ArrayList >:
In your code you used the ArrayList.removeAll method, so let's look into the source code of removeAll:
public boolean removeAll(Collection<?> c) {
    return batchRemove(c, false);
}
So we need to know what is in the batchRemove method (see the JDK source). The key part, as you can see, is:
for (; r < size; r++)
    if (c.contains(elementData[r]) == complement)
        elementData[w++] = elementData[r];
Now let's look into the contains method, which is just a wrapper around the indexOf method (again in the JDK source). Inside indexOf there is an O(n) operation (quoting just a part here):
for (int i = 0; i < size; i++)
    if (elementData[i] == null)
        return i;
So overall, removeAll is an O(n^2) operation.
OPTION 2 < HashSet >:
Previously I wrote something here, but it seems I was wrong on some points, so I have removed it; better to take a suggestion from an expert about HashSet. I am not sure whether a HashMap would be a better solution in your case, so I am proposing another solution.
OPTION 3 < My Suggestion You can try>:
step 1: if your data is already sorted, skip this step; otherwise, sort the list you will subtract (the second list)
step 2: for every element of the first list, run a binary search in the second list
step 3: if no match is found, store the element in a result list; if a match is found, don't add it
step 4: the result list is your final answer (a code sketch follows the cost analysis below)
Cost of option 3:
step 1: O(n log n) time if not already sorted
step 2: O(n log n) time
step 3: O(n) space

So overall: O(n log n) time and O(n) space.
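A minimal sketch of option 3 (subtractSorted is a hypothetical name; the second list is copied so the caller's list stays untouched):

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Sort the list being subtracted, then binary-search it for every
// element of the first list: O(n log n) time, O(n) extra space.
static List<Integer> subtractSorted(List<Integer> first, List<Integer> second) {
    List<Integer> sorted = new ArrayList<>(second);
    Collections.sort(sorted);                     // step 1
    List<Integer> result = new ArrayList<>();     // step 4: the answer
    for (Integer element : first) {               // step 2: n searches
        if (Collections.binarySearch(sorted, element) < 0) {
            result.add(element);                  // step 3: no match found
        }
    }
    return result;
}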
If a list is required, you can choose a LinkedList. In your case, as @Chris said, the ArrayList implementation moves all later elements on each removal.
With a LinkedList, adding/removing an element through an iterator does not require shifting any elements, so you would get much better performance for that access pattern. See this post.
Related
What is the best-performing method in Java (7, 8) to eliminate the integer elements of one ArrayList from another? All the elements are unique in the first and second lists.
At the moment I know the API method removeAll and use it this way:
tempList.removeAll(tempList2);
The problem appears when I operate on ArrayLists with more than 10000 elements. For example, when I remove 65000 elements, the delay is about 2 seconds. But I need to operate on even larger lists with more than 1000000 elements.
What is the strategy for this issue?
Maybe something with the new Stream API would solve it?
tl;dr:
Keep it simple. Use
list.removeAll(new HashSet<T>(listOfElementsToRemove));
instead.
As Eran already mentioned in his answer: The low performance stems from the fact that the pseudocode of a generic removeAll implementation is
public boolean removeAll(Collection<?> c) {
    for (each element e of this) {
        if (c.contains(e)) {
            this.remove(e);
        }
    }
}
So the contains call that is done on the list of elements to remove will cause the O(n*k) performance (where n is the number of elements to remove, and k is the number of elements in the list that the method is called on).
Naively, one could imagine that the this.remove(e) call on a List might also have O(k), and this implementation would also have quadratic complexity. But this is not the case: You mentioned that the lists are specifically ArrayList instances. And the ArrayList#removeAll method is implemented to delegate to a method called batchRemove that directly operates on the underlying array, and does not remove the elements individually.
So all you have to do is to make sure that the lookup in the collection that contains the elements to remove is fast - preferably O(1). This can be achieved by putting these elements into a Set. In the end, it can just be written as
list.removeAll(new HashSet<T>(listOfElementsToRemove));
Side notes:
The answer by Eran has, IMHO, two major drawbacks: First of all, it requires sorting the lists, which is O(n*log n) and simply not necessary. More importantly (and obviously): sorting will likely change the order of the elements! What if that is simply not desired?
Remotely related: There are some other subtleties involved in the removeAll implementations. For example, the HashSet removeAll method is surprisingly slow in some cases. Although this also boils down to O(n*n) when the elements to be removed are stored in a list, the exact behavior may indeed be surprising in this particular case.
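For illustration (a sketch; set and largeList are placeholder names): AbstractSet.removeAll picks its strategy by comparing sizes, so when the set is not larger than the list argument it ends up calling contains on the list for every set element:

// When set.size() <= largeList.size(), AbstractSet.removeAll iterates
// the set and calls largeList.contains(..) per element: O(n) each.
set.removeAll(largeList);                   // can degrade to O(n * n)
set.removeAll(new HashSet<>(largeList));    // O(1) average lookups instead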
Well, since removeAll checks for each element of tempList whether it appears in tempList2, the running time is proportional to the size of the first list multiplied by the size of the second list, which means O(N^2) unless one of the two lists is very small and can be considered as "constant size".
If, on the other hand, you pre-sort the lists, and then iterate over both lists with a single iteration (similar to the merge step in merge sort), the sorting will take O(NlogN) and the iteration O(N), giving you a total running time of O(NlogN). Here N is the size of the larger of the two lists.
If you can replace the lists by a sorted structure (perhaps a TreeSet, since you said the elements are unique), you can implement removeAll in linear time, since you won't have to do any sorting.
I haven't tested it, but something like this should work (assuming both tempList and tempList2 are sorted):
Iterator<Integer> iter1 = tempList.iterator();
Iterator<Integer> iter2 = tempList2.iterator();
Integer current = null;
Integer current2 = null;
boolean advance = true;
while (iter1.hasNext() && iter2.hasNext()) {
    if (advance) {
        current = iter1.next();
        advance = false;
    }
    if (current2 == null || current > current2) {
        current2 = iter2.next();
    }
    if (current <= current2) {
        advance = true;
        if (current.equals(current2))  // compare values, not Integer references
            iter1.remove();
    }
}
I suspect removing from an ArrayList is a performance hit, since the remaining elements must be shifted/compacted each time an element in the middle is removed. It may be faster to do this:
Create a Set of the elements to be removed.
Create the new result ArrayList that you need, call it R. You can give it sufficient capacity at construction.
Iterate through the original list; if an element is found in the Set, don't add it to R, otherwise add it.
This should be O(N), assuming that creating the Set and looking an element up in it are constant time.
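A sketch of those steps (names are illustrative):

import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

static <T> List<T> without(List<T> original, List<T> toRemove) {
    Set<T> removeSet = new HashSet<>(toRemove);    // the Set of elements to remove
    List<T> r = new ArrayList<>(original.size());  // result list R, pre-sized
    for (T element : original) {
        if (!removeSet.contains(element)) {        // keep only what survives
            r.add(element);
        }
    }
    return r;
}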
TreeMap has O(log n) performance (best case); however, I need the following operations to be efficient:
get the highest element
get XY highest elements
insert
Another possibility would be to use a PriorityQueue with the following:
use the "index" element as the ordering for the PriorityQueue
an equals implementation that checks only "index" equality
but this would be a hack, since that equals method would be error-prone (if used outside of the PriorityQueue).
Any better structure for this?
More details below, which you might skip, since the first answer already covers the specifics well; I'm keeping them for the theoretical discussion.
NOTE: I could use non-standard data structures; in this project I'm already using an UnrolledLinkedList, since it is most likely the most efficient structure for another use.
THIS IS THE USE CASE (in case you are interested): I'm constructing an AI for a computer game, where
OffensiveNessHistory myOffensiveNess = battle.pl[orderNumber].calculateOffensivenessHistory();
With possible implementations:
public class OffensiveNessHistory {
    PriorityQueue<OffensiveNessHistoryEntry> offensivenessEntries = new PriorityQueue<OffensiveNessHistoryEntry>();
    ..
or
public class OffensiveNessHistory {
    TreeMap<Integer, OffensiveNessHistoryEntry> offensivenessEntries = new TreeMap<Integer, OffensiveNessHistoryEntry>();
    ..
I want to check the first player's offensiveness and defensiveness history to predict whether I should play the most offensive or the most defensive move.
First, you should think about the size of the structure (optimizing for just a few entries might not be worth it) and the frequency of the operations.
If reads are more frequent than writes (which I assume is the case), I'd use a structure that optimizes for reads at the cost of inserts, e.g. a sorted ArrayList where you insert at the position found using a binary search. This would be O(log n) for the search plus the cost of moving later entries to the right, but would mean good cache coherence and O(1) lookups.
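For illustration, a sketch of that sorted-ArrayList idea (insertSorted is a hypothetical helper), covering the operations from the question:

import java.util.Collections;
import java.util.List;

// Insert while keeping the list sorted: O(log n) for the search plus
// the cost of shifting the later entries one slot to the right.
static void insertSorted(List<Integer> sorted, Integer value) {
    int index = Collections.binarySearch(sorted, value);
    if (index < 0) {
        index = -index - 1;  // binarySearch encodes the insertion point
    }
    sorted.add(index, value);
}

// Highest element:     sorted.get(sorted.size() - 1)
// XY highest elements: sorted.subList(sorted.size() - xy, sorted.size())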
A standard PriorityQueue internally also uses an array, but would require you to use an iterator to get element n (e.g. if you'd at some point need the median or the lowest entry).
There might be structures that optimize writes even more while keeping O(1) reads, but unless those writes are very frequent you might not even notice any performance gains.
Finally, and most importantly, you should try not to optimize based on guesses but profile first. There might be other parts of your code that eat up performance and render optimization of the data structures rather useless.
Problem
I'm writing a simple Java program in which I have a TreeSet that contains Comparable elements (a class that I've written myself). At a certain point I need to take only the first k elements from it.
What I've done
Currently I've found two different solutions to my problem:
Using a simple method written by me, which copies the first k elements from the initial TreeSet;
Using Google Guava's greatestOf method.
For the second option you need to call the method in this way:
Ordering.natural().greatestOf(mySet, 80)
But I think it's wasteful to use this kind of invocation, because the elements are already sorted. Am I wrong?
Question
I want to ask which is a correct and, at the same time, efficient way to obtain a Collection subclass that contains the first k elements of a TreeSet.
Additional information
Java version: >= 7
You could use Guava's Iterables#limit:
ImmutableList.copyOf(Iterables.limit(yourSet, 7))
http://docs.guava-libraries.googlecode.com/git/javadoc/com/google/common/collect/Iterables.html#limit(java.lang.Iterable, int)
I would suggest using a TreeSet<YourComparableClass> collection; it seems to be the solution you are looking for.
A TreeSet can give you an iterator, and you can simply iterate K times, storing the objects the iterator returns: the elements will be returned in order.
Moreover, a TreeSet keeps your elements sorted at all times: whenever you add or remove elements, they are inserted and removed so that the structure remains ordered.
Here is a possible example:
public static ArrayList<YourComparableClass> getFirstK(TreeSet<YourComparableClass> set, int k) {
    Iterator<YourComparableClass> iterator = set.iterator();
    ArrayList<YourComparableClass> result = new ArrayList<>(k); // to store the first K items
    // the iterator returns items in ascending order; hasNext() guards against k > set.size()
    for (int i = 0; i < k && iterator.hasNext(); i++) {
        result.add(iterator.next());
    }
    return result;
}
The descendingIterator() method of java.util.TreeSet yields elements from greatest to least, so you can just step it however many times, inserting the elements into a collection. The running time is O(log n + k) where k is the number of elements returned, which is surely fast enough.
If you're using a HashSet, on the other hand, then the elements in fact are not sorted, so you need to use the linear-time selection method that you indicated.
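A sketch of the descendingIterator() approach described above (greatestK is an illustrative name):

import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.TreeSet;

// Collects the k greatest elements of the set, greatest first.
static <T> List<T> greatestK(TreeSet<T> set, int k) {
    List<T> result = new ArrayList<>(k);
    Iterator<T> it = set.descendingIterator();
    while (it.hasNext() && result.size() < k) {
        result.add(it.next());
    }
    return result;
}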
Well, I know it is a very novice question, but nothing comes to mind. Currently I am trying this, but it is far too inefficient for such a big number. Can anyone help?
int count = 66000000;
LinkedList<Integer> list = new LinkedList<Integer>();
for (int i = 1; i <= count; i++) {
    list.add(i);
    //System.out.println(i);
}
EDIT:
Actually, I have to perform operations on the whole list (queue) repeatedly (say, remove some elements on a condition and add them back), so having to iterate over the whole list became very slow; with numbers like these it took more than 10 minutes.
The size of your output is O(n); therefore it's literally impossible for an algorithm that populates your list to be more efficient than O(n) time complexity.
You're spending a whole lot more time printing your numbers to the screen than you actually spend generating the list. If you really want to speed this code up, remove the
System.out.println(i);
On a separate note, I've noticed that you're using a LinkedList; if you used an array (or an array-based list) it should be faster.
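For instance, a minimal sketch of the array-based variant (assuming a heap large enough for 66 million boxed Integers; pre-sizing avoids the repeated internal array copies a growing ArrayList would perform):

int count = 66000000;
List<Integer> list = new ArrayList<>(count);  // pre-size the backing array
for (int i = 1; i <= count; i++) {
    list.add(i);
}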
You could implement a List whose get(int index) method simply returns the index (or some value based on the index). Creation of the list would then be constant time (O(1)). The list would have to be immutable.
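A minimal sketch of such a virtual list (RangeList is a hypothetical name; it represents the sequence 1..count without storing it):

import java.util.AbstractList;

// Immutable "virtual" list of the integers 1..count; get() computes the
// value from the index, so construction is O(1) and no storage is used.
class RangeList extends AbstractList<Integer> {
    private final int count;

    RangeList(int count) {
        this.count = count;
    }

    @Override
    public Integer get(int index) {
        if (index < 0 || index >= count) {
            throw new IndexOutOfBoundsException(String.valueOf(index));
        }
        return index + 1;  // element at index i is i + 1
    }

    @Override
    public int size() {
        return count;
    }
}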
Your question isn't just about building the list; it includes deletion and re-insertion. I suspect you should be using a HashSet, or maybe even a BitSet, instead of a List of any kind.
I have a HashSet of strings and an array of strings. I want to find out whether any of the elements in the array exists in the HashSet. I have the following code that works, but I feel that it could be done faster.
public static boolean check(HashSet<String> group, String[] elements) {
    for (int i = 0; i < elements.length; i++) {
        if (group.contains(elements[i]))
            return true;
    }
    return false;
}
Thanks.
It's O(n) in this case (an array is used); it cannot be faster.
If you just want to make the code cleaner:
return !Collections.disjoint(group, Arrays.asList(elements));
That seems somewhat reasonable. HashSet has an O(1) (usually) contains() since it simply has to hash the string you give it to find an index, and there is either something there or there isn't.
If you need to check each element in your array, there simply isn't any faster way to do it (sequentially, of course).
... but I feel that it could be done faster.
I don't think there is a faster way. Your code is O(N) on average, where N is the number of strings in the array. I don't think that you can improve on that.
As others have said, the slowest part of the algorithm is iterating over every element of the array. The only way you could make it faster would be if you knew some information about the contents of the array beforehand which allowed you to skip over certain elements, like if the array was sorted and had duplicates in known positions or something. If the input is essentially random, then there's not a lot you can do.
If you know that the set is a sorted set and that the array is sorted, you can take the subset of the set spanning the array's first and last elements, possibly doing better than O(|array| * access-time(set)); in particular, it allows some negative results in better than O(|array|) time. If you're hashing, you can't.
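A hedged sketch of that idea, assuming the set is a NavigableSet (e.g. a TreeSet) and the array is sorted:

import java.util.NavigableSet;

// Restrict lookups to the set's overlap with the array's value range;
// disjoint ranges give a negative answer without scanning the array.
static boolean check(NavigableSet<String> group, String[] sortedElements) {
    if (group.isEmpty() || sortedElements.length == 0) {
        return false;
    }
    NavigableSet<String> overlap = group.subSet(
            sortedElements[0], true,
            sortedElements[sortedElements.length - 1], true);
    if (overlap.isEmpty()) {
        return false;  // quick negative result
    }
    for (String s : sortedElements) {
        if (overlap.contains(s)) {
            return true;
        }
    }
    return false;
}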