Using removeAll() or something else to compare two lists - java

I have two lists, let's call them list A and list B. Both of these lists contain names and there are no duplicates (they are unique values). Every name in list B can be found in list A. I want to find which names are missing from list B in order to insert those missing names into a database. Basic example:
List<String> a = new ArrayList<>(Arrays.asList("name1", "name2", "name3", "name4","name5","name6"));
List<String> b = new ArrayList<>(Arrays.asList("name1", "name2", "name4", "name6"));
a.removeAll(b);
//iterate through and insert into my database here
From what I've searched, removeAll() seems to be the go-to answer. In my case, I am dealing with a wide range of possible quantities: anywhere between 500 and 50,000 names. Will removeAll() suffice for this? I've read that removeAll() is O(n^2), which may not be a problem with very small quantities, but with larger quantities it sounds like it could be. I'd imagine it also depends on the user's patience as to when it becomes a problem. Ultimately I'm wondering if there is a better way to do this without adding a huge amount of complexity, as I do appreciate simplicity (to a point).

If the only thing you're doing with these lists is inserting them into a database, you shouldn't really care about the order of the elements. You could use HashSets instead of ArrayLists and get O(n) performance instead of O(n^2). As a side bonus, using a Set will ensure the values in a and b are really unique.
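A minimal sketch of that idea, using the same data as above (order is discarded):

Set<String> a = new HashSet<>(Arrays.asList("name1", "name2", "name3", "name4", "name5", "name6"));
Set<String> b = new HashSet<>(Arrays.asList("name1", "name2", "name4", "name6"));
a.removeAll(b); // roughly O(n): each membership test against the HashSet b is O(1) on average
// a now holds exactly the names missing from b: name3 and name5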

50000 is a very small amount of data. Unless you're doing this repeatedly, anything reasonable would likely be good enough.
One way to implement this:
List<String> a = new ArrayList<>(Arrays.asList("name1", "name2", "name3", "name4", "name5", "name6"));
Set<String> b = new HashSet<>(Arrays.asList("name1", "name2", "name4", "name6")); // a Set, so contains() is O(1)
for (String s : a) {
    if (!b.contains(s)) {
        insertToDb(s);
    }
}
or using the Stream API in Java 8:
List<String> result = a.stream().filter(s -> !b.contains(s)).collect(Collectors.toList());
// alternatively: Set<String> result = a.stream().filter(s -> !b.contains(s)).collect(Collectors.toSet());

Related

Cull all duplicates in a set

I'm using Set to isolate the unique values of a List (in this case, I'm getting a set of points):
Set<PVector> pointSet = new LinkedHashSet<PVector>(listToCull);
This will return a set of unique points, but for every item in listToCull, I'd like to test the following: if there is a duplicate, cull all of the duplicate items. In other words, I want pointSet to represent the set of items in listToCull which are already unique (every item in pointSet had no duplicate in listToCull). Any ideas on how to implement?
EDIT - I think my first question needs more clarification. Below is some code which will execute what I'm asking for, but I'd like to know if there is a faster way. Assuming listToCull is a list of PVectors with duplicates:
Set<PVector> pointSet = new LinkedHashSet<PVector>(listToCull);
List<PVector> uniqueItemsInListToCull = new ArrayList<PVector>();
for (PVector pt : pointSet) {
    int counter = 0;
    for (PVector ptCheck : listToCull) {
        if (pt == ptCheck) {
            counter++;
        }
    }
    if (counter < 2) {
        uniqueItemsInListToCull.add(pt);
    }
}
uniqueItemsInListToCull will be different from pointSet. I'd like to do this without loops if possible.
You will have to do some programming yourself: create two empty sets; one will contain the unique elements, the other the duplicates. Then loop through the elements of listToCull. For each element, check whether it is in the duplicates set. If it is, ignore it. Otherwise, check if it is in the unique-elements set. If it is, remove it there and add it to the duplicates set. Otherwise, add it to the unique-elements set.
If your PVector class has a good hashCode() method, HashSets are quite efficient, so the performance of this will not be too bad.
Untested:
Set<PVector> uniques = new HashSet<>();
Set<PVector> duplicates = new HashSet<>();
for (PVector p : listToCull) {
    if (!duplicates.contains(p)) {
        if (uniques.contains(p)) {
            uniques.remove(p);
            duplicates.add(p);
        } else {
            uniques.add(p);
        }
    }
}
Alternatively, you may use a third-party library which offers a Bag or MultiSet. This allows you to count how many occurrences of each element are in the collection, and then at the end discard all elements where the count is different than 1.
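For instance, a sketch with Guava's Multiset (assuming Guava is on the classpath and PVector has proper equals()/hashCode()):

import java.util.*;
import com.google.common.collect.HashMultiset;
import com.google.common.collect.Multiset;

Multiset<PVector> counts = HashMultiset.create(listToCull);
List<PVector> uniques = new ArrayList<>();
for (Multiset.Entry<PVector> e : counts.entrySet()) {
    if (e.getCount() == 1) { // discard anything that occurred more than once
        uniques.add(e.getElement());
    }
}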
What you are looking for is the intersection:
Assuming that PVector (terrible name, by the way) implements hashCode() and equals() correctly, a Set will eliminate duplicates.
If you want an intersection of the List and an existing Set, create a Set from the List, then use Sets.intersection() from Guava to get the elements common to both sets.
public static <E> Sets.SetView<E> intersection(Set<E> set1, Set<?> set2)
Returns an unmodifiable view of the intersection of two sets. The returned set contains all
elements that are contained by both backing sets. The iteration order
of the returned set matches that of set1. Results are undefined if
set1 and set2 are sets based on different equivalence relations (as
HashSet, TreeSet, and the keySet of an IdentityHashMap all are).
Note: The returned view performs slightly better when set1 is the
smaller of the two sets. If you have reason to believe one of your
sets will generally be smaller than the other, pass it first.
Unfortunately, since this method sets the generic type of the returned
set based on the type of the first set passed, this could in rare
cases force you to make a cast, for example:
Set<Object> aFewBadObjects = ...
Set<String> manyBadStrings = ...

// impossible for a non-String to be in the intersection
@SuppressWarnings("unchecked")
Set<String> badStrings = (Set) Sets.intersection(aFewBadObjects, manyBadStrings);

This is unfortunate, but should come up only very rarely.
You can also do union, complement, difference and cartesianProduct as well as filtering very easily.
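As a quick sketch of how that applies here (existingSet is a hypothetical set you already hold; assumes Guava):

import com.google.common.collect.Sets;

Set<PVector> fromList = new HashSet<>(listToCull);
// returns an unmodifiable *view*; copy it if you need an independent set
Set<PVector> common = Sets.intersection(fromList, existingSet);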
So you want pointSet to hold the items in listToCull which have no duplicates? Is that right?
I would be inclined to create a Map, then iterate twice over the list, the first time putting a value of zero in for each PVector, the second time adding one to the value for each PVector, so at the end you have a map with counts. Now you're interested in the keys of the map for which the value is exactly equal to one.
It's not perfectly efficient - you're operating on list items more times than absolutely necessary - but it's quite clean and simple.
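A sketch of that counting idea, collapsed into a single pass with Map.merge (Java 8+):

Map<PVector, Integer> counts = new HashMap<>();
for (PVector pt : listToCull) {
    counts.merge(pt, 1, Integer::sum); // occurrence count per point
}
List<PVector> uniques = new ArrayList<>();
for (Map.Entry<PVector, Integer> e : counts.entrySet()) {
    if (e.getValue() == 1) { // keep only the points seen exactly once
        uniques.add(e.getKey());
    }
}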
OK, here's the solution I've come up with, I'm sure there are better ones out there but this one's working for me. Thanks to all who gave direction!
To get the distinct items, you can run the list through a Set, where listToCull is a list of PVectors with duplicates:
List<PVector> culledList = new ArrayList<PVector>();
Set<PVector> pointSet = new LinkedHashSet<PVector>(listToCull);
culledList.addAll(pointSet);
To go further, suppose you want a list where you've removed all items in listToCull which have a duplicate. You can iterate through the list and test whether each item is already in a set of seen items. This lets us do one loop, rather than a nested loop:
Set<PVector> pointSet = new HashSet<PVector>(); // points seen so far (must start empty)
Set<PVector> removalList = new HashSet<PVector>(); // points seen more than once
for (PVector pt : listToCull) {
    if (pointSet.contains(pt)) {
        removalList.add(pt);
    } else {
        pointSet.add(pt);
    }
}
pointSet.removeAll(removalList);
List<PVector> onlyUniquePts = new ArrayList<PVector>();
onlyUniquePts.addAll(pointSet);

HashMap<String, Set<String>> can't find its key even though it exists

I have a HashMap of this type
Map<String, Set<String>> list_names = new HashMap<String,Set<String>>();
that I have constructed and filled from a txt file in which each line has a list's name followed by a set of names:
98298XSD98 N9823923 N123781 N723872 ....
13214FS923 N9818324 N982389
... ...
I made another HashMap, called names_list, that pretty much reverses the mapping of the list_names HashMap, so that I can get all the lists that a given name is in.
Now, the HashMap I have is pretty big: there are over 400k names and 60k lists.
Somewhere in my code I'm getting the Set of lists for two different names, many many times, and then taking the intersection of these two sets for computational purposes:
a_list = this.names_lists.get(a);
b_list = this.names_lists.get(b);
// printing lists
//intersection stuff
But what's weird is that the HashMap didn't recognize one of its keys (or maybe many of its keys) and treated it as null after one retrieval, or sometimes zero retrievals.
a:0122211029:[R3DDZP35ERRSA] b:1159829805:[R3ALX1GRMY5IMX, R1204YEYK4MBCA]
a:0122211029:[] b:1593072570:[R222JSDL42MS64]
Here I'm just printing the name and names_list.get(key).toString();
and yes, I'm printing these before doing any intersection stuff.
Any idea why it is doing that?
When you calculate the intersection of two sets with retainAll, you actually modify the set you call it on. That is why your map appears to "lose" values: the sets stored in the map are being emptied by the intersection. You have to create a temporary set to hold the intersection, e.g.:
a_list = this.names_lists.get(a);
b_list = this.names_lists.get(b);
Set<String> intersection = new HashSet<>(a_list); // copy first, so the stored set is untouched
intersection.retainAll(b_list); // note: retainAll returns a boolean, not the resulting set

Looking for a table-like data structure

I have 2 sets of data.
Let's say one is people, the other is groups.
A person can be in multiple groups, while a group can have multiple people.
My operations will basically be CRUD on groups and people,
as well as a method that makes sure a list of people are all in different groups (which gets called a lot).
Right now I'm thinking of making a table of binary 0's and 1's, with the rows representing people and the columns groups.
I can perform the check in O(n) time by adding the rows as binary numbers and comparing the sum with the bitwise "or" of the rows; the two are equal exactly when no group column is set more than once.
E.g.
Group   A  B  C  D
ppl1    1  0  0  1
ppl2    0  1  1  0
ppl3    0  0  1  0
ppl4    0  1  0  0

check(ppl1, ppl2) = (1001 + 0110) == (1001 | 0110)
                  = 1111 == 1111
                  = true
check(ppl2, ppl3) = (0110 + 0010) == (0110 | 0010)
                  = 1000 == 0110
                  = false
I'm wondering if there is a data structure that does something similar already so I don't have to write my own and maintain O(n) runtime.
I don't know all of the details of your problem, but my gut instinct is that you may be overthinking things here. How many objects are you planning on storing in this data structure? If you have really large amounts of data to store here, I would recommend that you use an actual database instead of a data structure. The type of operations you are describing here are classical examples of things that relational databases are good at. MySQL and PostgreSQL are examples of large-scale relational databases that could do this sort of thing in their sleep. If you'd like something lighter-weight, SQLite would probably be of interest.
If you do not have large amounts of data that you need to store in this data structure, I'd recommend keeping it simple, and only optimizing when you are sure that it won't be fast enough for what you need to do. As a first shot, I'd just recommend using Java's built-in List interface to store your people and a Map to store groups. You could do something like this:
// Use a list to keep track of People
List<Person> myPeople = new ArrayList<Person>();
Person steve = new Person("Steve");
myPeople.add(steve);
myPeople.add(new Person("Bob"));
// Use a Map to track Groups
Map<String, List<Person>> groups = new HashMap<String, List<Person>>();
groups.put("Everybody", myPeople);
groups.put("Developers", Arrays.asList(steve));
// Does a group contain everybody?
groups.get("Everybody").containsAll(myPeople); // returns true
groups.get("Developers").containsAll(myPeople); // returns false
This definitely isn't the fastest option available, but if you do not have a huge number of People to keep track of, you probably won't even notice any performance issues. If you do have some special conditions that would make the speed of using regular Lists and Maps unfeasible, please post them and we can make suggestions based on those.
EDIT:
After reading your comments, it appears that I misread your issue on the first run through. It looks like you're not so much interested in mapping groups to people, but instead mapping people to groups. What you probably want is something more like this:
Map<Person, List<String>> associations = new HashMap<Person, List<String>>();
Person steve = new Person("Steve");
Person ed = new Person("Ed");
associations.put(steve, Arrays.asList("Everybody", "Developers"));
associations.put(ed, Arrays.asList("Everybody"));
// This is the tricky part
boolean allDifferent = checkForSharedGroups(associations, Arrays.asList(steve, ed)); // true when no groups are shared
So how do you implement the checkForSharedGroups method? In your case, since the numbers surrounding this are pretty low, I'd just try out the naive method and go from there.
public boolean checkForSharedGroups(
        Map<Person, List<String>> associations,
        List<Person> peopleToCheck) {
    List<String> groupsThatHaveMembers = new ArrayList<String>();
    for (Person p : peopleToCheck) {
        List<String> groups = associations.get(p);
        for (String s : groups) {
            if (groupsThatHaveMembers.contains(s)) {
                // We've already seen this group, so we can return
                return false;
            } else {
                groupsThatHaveMembers.add(s);
            }
        }
    }
    // If we've made it to this point, nobody shares any groups.
    return true;
}
This method probably doesn't have great performance on large datasets, but it is very easy to understand. Because it's encapsulated in its own method, it should also be easy to update if it turns out you need better performance. If you do need to increase performance, I would look at overriding the equals and hashCode methods of Person, which would make lookups in the associations map faster. From there you could also look at a custom type instead of String for groups, also with overridden equals and hashCode methods. This would considerably speed up the contains method used above.
The reason why I'm not too concerned about performance is that the numbers you've mentioned aren't really that big as far as algorithms are concerned. Because this method returns as soon as it finds two matching groups, in the very worst case you will call ArrayList.contains a number of times equal to the number of groups that exist. In the very best case scenario, it only needs to be called twice. Performance will likely only be an issue if you call checkForSharedGroups very, very often, in which case you might be better off finding a way to call it less often instead of optimizing the method itself.
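A sketch of that override (an assumption: Person wraps just a name, as in the snippets above):

final class Person {
    private final String name;

    Person(String name) { this.name = name; }

    @Override
    public boolean equals(Object o) {
        return o instanceof Person && ((Person) o).name.equals(name);
    }

    @Override
    public int hashCode() { // required for correct and fast HashMap lookups
        return name.hashCode();
    }
}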
Have you considered a hash table? If you know all of the keys you'll be using in advance, it's possible to use a perfect hash function, which will allow you to achieve constant lookup time.
How about having two separate entities for People and Group? Inside People, keep a Set of Group, and vice versa.
class People {
    Set<Group> groups;
    // API for addGroup, getGroup
}

class Group {
    Set<People> people;
    // API for addPeople, getPeople
}
check(People p1, People p2):
1) call getGroup on both p1 and p2
2) compare the sizes of the two sets
3) iterate over the smaller set, and check if each of its groups is present in the other set (a sketch follows below)
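A minimal sketch of that check, assuming a hypothetical getGroups() accessor on People:

static boolean shareGroup(People p1, People p2) {
    Set<Group> g1 = p1.getGroups();
    Set<Group> g2 = p2.getGroups();
    // iterate over the smaller set; contains() on the other HashSet is O(1) on average
    Set<Group> smaller = g1.size() <= g2.size() ? g1 : g2;
    Set<Group> larger = (smaller == g1) ? g2 : g1;
    for (Group g : smaller) {
        if (larger.contains(g)) {
            return true; // p1 and p2 share at least one group
        }
    }
    return false;
}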
Now you can store the People objects in basically any data structure: preferably a linked list if the size is not fixed, otherwise an array.

Share values between several transformations?

Imagine I have the following List of values:
List<String> values = Lists.newArrayList("a", "a", "b", "c");
Now I want to add an index to all values, so that one ends up with this as list:
a1 a2 b1 c1 // imagine numbers as subscript
I want to use a FluentIterable and its transform method for that, so something like this:
from(values).transform(addIndexFunction);
The problem with that is that addIndexFunction needs to know how often the index has already been increased. Think of a2: when adding the index to this a, the function needs to know that there was already an a1.
So, is there some kind of best practice for doing such a thing? My current idea is to create a Map with each letter as key, so:
Map<String,Integer> counters = new HashMap<>();
// the following should be generated automatically, but for the sake of this example it's done manually...
counters.put("a", 0);
counters.put("b", 0);
counters.put("c", 0);
and then modify my transform call:
from(values).transform(addIndexFunction(counters));
As Map is an object and passed by reference, I can now share the counter state between the transformations, right? Feedback, better ideas? Is there some built-in mechanism for such things in Guava?
Thanks for any hint!
Use a Multiset to replace the HashMap, and you're good to go, following #Perception's suggestion to encapsulate the Multiset in the Function itself and aggregating data as the function is applied.
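A sketch of that encapsulation (note the next answer's warning about stateful Functions and lazy transforms):

import com.google.common.base.Function;
import com.google.common.collect.HashMultiset;
import com.google.common.collect.Multiset;

Function<String, String> addIndexFunction = new Function<String, String>() {
    private final Multiset<String> counts = HashMultiset.create();

    @Override
    public String apply(String input) {
        int seenBefore = counts.add(input, 1); // add() returns the count before adding
        return input + (seenBefore + 1);       // first "a" -> "a1", second -> "a2"
    }
};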
Don't use transform here, or your iterable will have different values every time you iterate over it, and will generally behave very weirdly. (It's also somewhat frowned upon to have state in a Function.)
Instead, do a proper for loop with a Multiset helper:
Multiset<String> counts = HashMultiset.create();
List<Subscript> result = Lists.newArrayList();
for (String value : values) {
    int count = counts.add(value, 1) + 1; // Multiset.add(E, int) returns the count *before* the addition
    result.add(new Subscript(value, count));
}
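Subscript isn't defined in the answer; a hypothetical holder might look like:

final class Subscript {
    final String value;
    final int index;

    Subscript(String value, int index) {
        this.value = value;
        this.index = index;
    }

    @Override
    public String toString() {
        return value + index; // e.g. "a1", "a2"
    }
}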

Adding elements into ArrayList at position larger than the current size

Currently I'm using an ArrayList to store a list of elements, whereby I will need to insert new elements at specific positions. There is a need for me to enter elements at a position larger than the current size. For example:
ArrayList<String> arr = new ArrayList<String>();
arr.add(3,"hi");
Now I already know there will be an IndexOutOfBoundsException. Is there another way, or another object, where I can do this while still keeping the order? This is because I have methods that find elements based on their index. For example:
ArrayList<String> arr = new ArrayList<String>();
arr.add("hi");
arr.add(0,"hello");
I would expect to find "hi" at index 1 instead of index 0 now.
So in summary, short of manually inserting null into the elements in-between, is there any way to satisfy these two requirements:
Insert elements into position larger than current size
Push existing elements to the right when I insert elements in the middle of the list
I've looked at Java ArrayList add item outside current size, as well as HashMap, but HashMap doesn't satisfy my second criterion. Any help would be greatly appreciated.
P.S. Performance is not really an issue right now.
UPDATE: There have been some questions on why I have these particular requirements, it is because I'm working on operational transformation, where I'm inserting a set of operations into, say, my list (a math formula). Each operation contains a string. As I insert/delete strings into my list, I will dynamically update the unapplied operations (if necessary) through the tracking of each operation that has already been applied. My current solution now is to use a subclass of ArrayList and override some of the methods. I would certainly like to know if there is a more elegant way of doing so though.
Your requirements are contradictory:
... I will need to insert new elements at specific positions.
There is a need for me to enter elements at a position larger than the current size.
These imply that positions are stable; i.e. that an element at a given position remains at that position.
I would expect to find "hi" at index 1 instead of index 0 now.
This states that positions are not stable under some circumstances.
You really need to make up your mind which alternative you need.
If you must have stable positions, use a TreeMap or HashMap. (A TreeMap allows you to iterate the keys in order, but at the cost of more expensive insertion and lookup ... for a large collection.) If necessary, use a "position" key type that allows you to "always" generate a new key that goes between any existing pair of keys.
If you don't have to have stable positions, use an ArrayList, and deal with the case where you have to insert beyond the end position using append.
I fail to see how it is sensible for positions to be stable if you insert beyond the end, and allow instability if you insert in the middle. (Besides, the latter is going to make the former unstable eventually ...)
You can also use a TreeMap to maintain the order of the keys.
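A minimal sketch of that, assuming integer positions as keys (note entries never shift, which matches the first requirement but not the second):

TreeMap<Integer, String> elems = new TreeMap<>();
elems.put(3, "hi");     // works even though positions 0-2 are unused
elems.put(0, "hello");
elems.get(3);           // "hi" stays at position 3
for (Map.Entry<Integer, String> e : elems.entrySet()) {
    System.out.println(e.getKey() + " -> " + e.getValue()); // iterates in key order
}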
First and foremost, I would say use a Map instead of a List. I guess your problem can be solved in a better way if you use a Map. But in any case, if you really want to do this with an ArrayList:
ArrayList<String> a = new ArrayList<String>(); // create an empty list
a.addAll(Arrays.asList(new String[100])); // pad with n nulls; here n is 100, but choose n based on your requirement
a.add(7, "hello");
a.add(2, "hi");
a.add(1, "hi2");
Use the Vector class to solve this issue.
Vector<String> vector = new Vector<>();
vector.setSize(100);
vector.set(98, "a");
When setSize(100) is called, all 100 elements are initialized with null values.
For those who are still dealing with this, you may do it like this.
Object[] array = new Object[10];
array[0] = "1";
array[3] = "3";
array[2] = "2";
array[7] = "7";
List<Object> list = Arrays.asList(array);
But the thing is, you need to know the total size first, and note that Arrays.asList returns a fixed-size list backed by the array. (This should just be a comment, but I don't have enough reputation for that.)
