I have a complicated requirement where a list of records has comments in it. We have a reporting feature where each and every change should be logged and reported; hence, as per our design, we create a whole new record even if a single field has been updated.
Now we want to get the history of comments (reverse sorted by timestamp) stored in our DB. After running the query I got the list of comments, but it contains duplicate entries because some other field was changed. It also contains null entries.
I wrote the following code to remove duplicate and null entries.
List<Comment> toRet = new ArrayList<>();
dbCommentHistory.forEach(ele -> {
//Directly copy if toRet is empty.
if (!toRet.isEmpty()) {
int lastIndex = toRet.size() - 1;
Comment lastAppended = toRet.get(lastIndex);
// If comment is null don't proceed
if (ele.getComment() == null) {
return;
}
// remove if we have same comment as last time
if (StringUtils.compare(ele.getComment(), lastAppended.getComment()) == 0) {
toRet.remove(lastIndex);
}
}
//add element to new list
toRet.add(ele);
});
This logic works fine and has been tested, but I want to convert this code to use lambdas, streams and other Java 8 features.
You can use the following snippet:
Collection<Comment> result = dbCommentHistory.stream()
.filter(c -> c.getComment() != null)
.collect(Collectors.toMap(Comment::getComment, Function.identity(), (first, second) -> second, LinkedHashMap::new))
.values();
If you need a List instead of a Collection you can use new ArrayList<>(result).
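For example, a minimal variation of the snippet above (nothing new assumed, only the ArrayList copy constructor added):
List<Comment> result = new ArrayList<>(
        dbCommentHistory.stream()
                .filter(c -> c.getComment() != null)
                .collect(Collectors.toMap(Comment::getComment, Function.identity(),
                        (first, second) -> second, LinkedHashMap::new))
                .values());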
If you have implemented the equals() method in your Comment class like the following
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (o == null || getClass() != o.getClass()) return false;
return Objects.equals(comment, ((Comment) o).comment);
}
you can just use this snippet:
List<Comment> result = dbCommentHistory.stream()
.filter(c -> c.getComment() != null)
.distinct()
.collect(Collectors.toList());
But this would keep the first comment, not the last.
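Note that distinct() also relies on hashCode(), so if you take this route the hash code should be derived from the same field as equals(); a minimal sketch, assuming the field is named comment as above:
@Override
public int hashCode() {
    return Objects.hashCode(comment);
}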
If I'm understanding the logic in the question's code, you want to remove consecutive repeated comments but keep duplicates if there is some different comment in between in the input list.
In this case, simply using .distinct() (once equals and hashCode have been properly defined) won't work as intended, as non-consecutive duplicates will be eliminated as well.
The more "streamy" solution here is to use a custom Collector that, when folding elements into the accumulator, removes only the consecutive duplicates:
static final Collector<Comment, LinkedList<Comment>, LinkedList<Comment>> COMMENT_COLLECTOR = Collector.of(
        LinkedList::new,      // supplier
        (list, comment) -> {  // accumulator (folder): skip the element if it repeats the last kept comment
            if (list.isEmpty() || !Objects.equals(list.getLast().getComment(), comment.getComment())) {
                list.addLast(comment);
            }
        },
        (list1, list2) -> {   // combiner: discard list2's first element if identical to list1's last
            if (list1.isEmpty()) {
                return list2;
            }
            if (!list2.isEmpty()) {
                if (!Objects.equals(list1.getLast().getComment(), list2.getFirst().getComment())) {
                    list1.addAll(list2);
                } else {
                    list1.addAll(list2.subList(1, list2.size()));
                }
            }
            return list1;
        });
Notice that the accumulator is a LinkedList rather than a plain Deque implementation such as ArrayDeque: Deque (in java.util) offers convenient operations to access the first and last element, but it does not extend List, and LinkedList is the standard class that implements both interfaces, which is what the collector's result type requires here.
By default the collector receives the elements in the stream's encounter order, so this works. I know it is not much less code, but it is as good as it gets. If you define a static comparison method that handles null elements or null comments gracefully, you can make it a bit more compact:
static boolean sameComment(final Comment a, final Comment b) {
    if (a == b) {
        return true;
    } else if (a == null || b == null) {
        return false;
    } else {
        return Objects.equals(a.getComment(), b.getComment());
    }
}
static final Collector<Comment, LinkedList<Comment>, LinkedList<Comment>> COMMENT_COLLECTOR = Collector.of(
        LinkedList::new,      // supplier
        (list, comment) -> {  // accumulator (folder)
            if (!sameComment(list.peekLast(), comment)) {
                list.addLast(comment);
            }
        },
        (list1, list2) -> {   // combiner: discard list2's first element if identical to list1's last
            if (list1.isEmpty()) {
                return list2;
            }
            if (!sameComment(list1.peekLast(), list2.peekFirst())) {
                list1.addAll(list2);
            } else {
                list1.addAll(list2.subList(1, list2.size()));
            }
            return list1;
        });
----------
Perhaps you would prefer to declare a proper (named) class that implements Collector to make the intent clearer and avoid defining lambdas for each Collector action, or at least implement the lambdas passed to Collector.of as static methods to improve readability.
Now the code to do the actual work is rather trivial:
List<Comment> unique = dbCommentHistory.stream()
.collect(COMMENT_COLLECTOR);
That is it. However, it may become a bit more involved if you want to handle null Comment instances (elements). The code above already handles the comment string being null by considering it equal to another null string; null elements can be filtered out first:
List<Comment> unique = dbCommentHistory.stream()
.filter(Objects::nonNull)
.collect(COMMENT_COLLECTOR);
Your code can be simplified a bit. Notice that this solution does not use stream/lambdas but it seems to be the most succinct option:
List<Comment> toRet = new ArrayList<>(dbCommentHistory.size());
Comment last = null;
for (final Comment ele : dbCommentHistory) {
if (ele != null && (last == null || !Objects.equals(last.getComment(), ele.getComment()))) {
toRet.add(last = ele);
}
}
The outcome is not exactly the same as the question's code: in the latter, null elements might still be added to toRet, but it seems to me that you actually want to remove them completely instead. It is easy to modify the code (making it a bit longer) to get the same output, though.
If you insist on using a .forEach, that would not be too difficult; in that case last would need to be calculated at the beginning of the lambda. Here you may want to use an ArrayDeque so that you can conveniently use peekLast:
Deque<Comment> toRet = new ArrayDeque<>(dbCommentHistory.size());
dbCommentHistory.forEach( ele -> {
if (ele != null) {
final Comment last = toRet.peekLast();
if (last == null || !Objects.equals(last.getComment(), ele.getComment())) {
toRet.addLast(ele);
}
}
});
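If a List<Comment> is ultimately needed rather than a Deque, the result can simply be copied at the end (plain ArrayList copy constructor, nothing else assumed):
List<Comment> result = new ArrayList<>(toRet);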
Related
I have one "matching algorithm" method that I wrote using loops and if conditions.
Is it possible (and is it needed) to rewrite this code in Java 8 style?
private boolean matchIdsAndStatuses(List<Item> items, ItemResponse currentItemResponse, StatusValue statusValue) {
boolean isMatched = false;
if (CollectionUtils.isNonEmpty(items)) {
// Set of currentItemResponse ids
Set<Map.Entry<String, Status>> itemIds = currentItemResponse.getMapIdsAndStatuses().entrySet();
// List of inner items ids
List<String> innerItemIds =
items.stream().map(ItemBase::getInnerId).collect(Collectors.toList());
// Can we rewrite the following block of code in Java 8 style?
// Iterate through inner items ids
for (String innerItemId: innerItemIds) {
// Iterate through currentItemResponse ids
for (Map.Entry<String, Status> itemId: itemIds) {
// Check if innerItemIds and statuses were matched
if (Objects.equals(innerItemId, itemId.getKey())
&& itemId.getValue().getStatusValue().equals(statusValue)) {
isMatched = true;
break;
} else {
isMatched = false;
}
}
}
}
return isMatched;
}
Thank you.
If I understand correctly, you want to check that each of items is mapped to the given status in currentItemResponse.getMapIdsAndStatuses(). I think this will do what you want:
private boolean matchIdsAndStatuses(List<Item> items, ItemResponse currentItemResponse, StatusValue statusValue) {
return items.stream()
.map(ItemBase::getInnerId)
.map(currentItemResponse.getMapIdsAndStatuses()::get)
.filter(Objects::nonNull)
.map(Status::getStatusValue)
.filter(statusValue::equals)
.count() == items.size();
}
On second thought, I would recommend using instead the short-circuiting allMatch() operation. This will stop iterating as soon as a non-match is found:
return items.stream()
.map(ItemBase::getInnerId)
.map(currentItemResponse.getMapIdsAndStatuses()::get)
.allMatch(s -> s != null && s.getStatusValue().equals(statusValue));
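One behavioural difference worth noting: allMatch() returns true for an empty stream, whereas the original method returns false when items is empty because of the CollectionUtils.isNonEmpty guard. If that matters, a small sketch preserving the original behaviour:
return !items.isEmpty() && items.stream()
        .map(ItemBase::getInnerId)
        .map(currentItemResponse.getMapIdsAndStatuses()::get)
        .allMatch(s -> s != null && s.getStatusValue().equals(statusValue));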
It is required to iterate through items and check if their ID matches the ItemResponse. In addition the item status has to be checked against the parameter. In order to simplify things the example uses different types, but the end result might look similar to the following:
private boolean matchIdsAndStatuses(List<String> items, Map<String, String> currentItemResponse, String status) {
return items.stream()
.map(innerItemId -> innerItemId)
.anyMatch(innerItemId -> currentItemResponse.keySet().contains(innerItemId)
&& currentItemResponse.get(innerItemId).contains(status));
}
First we extract the innerItemId, then we check whether that's a valid key, and finally we fetch the status by key and compare it with the parameter.
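A slightly tighter variant of the same idea, collapsing the key check and the lookup into a single Map.getOrDefault call (same simplified types as above):
private boolean matchIdsAndStatuses(List<String> items, Map<String, String> currentItemResponse, String status) {
    return items.stream()
            .anyMatch(innerItemId -> currentItemResponse.getOrDefault(innerItemId, "").contains(status));
}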
I have the following code that I want to translate to Java 8 streams:
public ReleaseResult releaseReources() {
List<String> releasedNames = new ArrayList<>();
Stream<SomeResource> stream = this.someResources();
Iterator<SomeResource> it = stream.iterator();
while (it.hasNext() && releasedNames.size() < MAX_TO_RELEASE) {
SomeResource resource = it.next();
if (!resource.isTaken()) {
resource.release();
releasedNames.add(resource.getName());
}
}
return new ReleaseResult(releasedNames, it.hasNext(), MAX_TO_RELEASE);
}
Method someResources() returns a Stream<SomeResource> and ReleaseResult class is as follows:
public class ReleaseResult {
private int releasedCount;
private List<String> releasedNames;
private boolean hasMoreItems;
private int releaseLimit;
public ReleaseResult(List<String> releasedNames,
boolean hasMoreItems, int releaseLimit) {
this.releasedNames = releasedNames;
this.releasedCount = releasedNames.size();
this.hasMoreItems = hasMoreItems;
this.releaseLimit = releaseLimit;
}
// getters & setters
}
My attempt so far:
public ReleaseResult releaseReources() {
List<String> releasedNames = this.someResources()
.filter(resource -> !resource.isTaken())
.limit(MAX_TO_RELEASE)
.peek(SomeResource::release)
.map(SomeResource::getName)
.collect(Collectors.toList());
return new ReleaseResult(releasedNames, ???, MAX_TO_RELEASE);
}
The problem is that I can't find a way to know if there are pending resources to process. I've thought of using releasedNames.size() == MAX_TO_RELEASE, but this doesn't take into account the case where the stream of resources has exactly MAX_TO_RELEASE elements.
Is there a way to do the same with Java 8 streams?
Note: I'm not looking for answers like "you don't have to do everything with streams" or "using loops and iterators is fine". I'm OK if using an iterator and a loop is the only way or just the best way. It's just that I'd like to know if there's a non-murky way to do the same.
Since you don’t wanna hear that you don’t need streams for everything and loops and iterators are fine, let’s demonstrate it by showing a clean solution, not relying on peek:
public ReleaseResult releaseReources() {
return this.someResources()
.filter(resource -> !resource.isTaken())
.limit(MAX_TO_RELEASE+1)
.collect(
() -> new ReleaseResult(new ArrayList<>(), false, MAX_TO_RELEASE),
(result, resource) -> {
List<String> names = result.getReleasedNames();
if(names.size() == MAX_TO_RELEASE) result.setHasMoreItems(true);
else {
resource.release();
names.add(resource.getName());
}
},
(r1, r2) -> {
List<String> names = r1.getReleasedNames();
names.addAll(r2.getReleasedNames());
if(names.size() > MAX_TO_RELEASE) {
r1.setHasMoreItems(true);
names.remove(MAX_TO_RELEASE);
}
}
);
}
This assumes that // getters & setters includes getters and setters for all non-final fields of your ReleaseResult. And that getReleasedNames() returns the list by reference. Otherwise you would have to rewrite it to provide a specialized Collector having special non-public access to ReleaseResult (implementing another builder type or temporary storage would be an unnecessary complication, it looks like ReleaseResult is already designed exactly for that use case).
We could conclude that for any nontrivial loop code that doesn’t fit into the stream’s intrinsic operations, you can find a collector solution that basically does the same as the loop in its accumulator function, but suffers from the requirement of always having to provide a combiner function. Ok, in this case we can prepend a filter(…).limit(…) so it’s not that bad…
I just noticed, if you ever dare to use that with a parallel stream, you need a way to reverse the effect of releasing the last element in the combiner in case the combined size exceeds MAX_TO_RELEASE. Generally, limits and parallel processing never play well.
I don't think there's a nice way to do this. I've found a hack that does it lazily. What you can do is convert the Stream to an Iterator, convert the Iterator back to another Stream, do the Stream operations, then finally test the Iterator for a next element!
Iterator<SomeResource> it = this.someResource().iterator();
List<String> list = StreamSupport.stream(Spliterators.spliteratorUnknownSize(it, Spliterator.ORDERED), false)
.filter(resource -> !resource.isTaken())
.limit(MAX_TO_RELEASE)
.peek(SomeResource::release)
.map(SomeResource::getName)
.collect(Collectors.toList());
return new ReleaseResult(list, it.hasNext(), MAX_TO_RELEASE);
The only thing I can think of is
List<SomeResource> list = someResources(); // A List, rather than a Stream, is required
List<Integer> indices = IntStream.range(0, list.size())
        .filter(i -> !list.get(i).isTaken())
        .limit(MAX_TO_RELEASE)
        .boxed() // box to Stream<Integer>; Collectors.toList() is not available on IntStream
        .collect(Collectors.toList());
List<String> names = indices.stream()
.map(list::get)
.peek(SomeResource::release)
.map(SomeResource::getName)
.collect(Collectors.toList());
Then (I think) there are unprocessed elements if
names.size() == MAX_TO_RELEASE
&& (indices.isEmpty() || indices.get(indices.size() - 1) < list.size() - 1)
In Java 7, if I want to get the last not null element of a list, I write something like this:
public CustomObject getLastObject(List<CustomObject> list) {
for (int index = list.size() - 1; index >= 0; index--) {
if (list.get(index) != null) {
return list.get(index);
}
}
// handling of case when all elements are null
// or list is empty
...
}
I want to write a shorter code by using lambdas or another feature of Java 8. For example, if I want to get the first not null element I can write this:
public void someMethod(List<CustomObject> list) {
.....
CustomObject object = getFirstObject(list).orElseGet(/*handle this case*/);
.....
}
public Optional<CustomObject> getFirstObject(List<CustomObject> list) {
return list.stream().filter(object -> object != null).findFirst();
}
Maybe someone know how to solve this problem?
A possible solution would be to iterate over the List in reverse order and keep the first non null element:
public Optional<CustomObject> getLastObject(List<CustomObject> list) {
return IntStream.range(0, list.size()).mapToObj(i -> list.get(list.size() - i - 1))
.filter(Objects::nonNull)
.findFirst();
}
Note that there is no findLast method in the Stream API because a Stream is not necessarily ordered or finite.
Another solution is to iterate over the list and reduce it by keeping only the current element. This effectively reduces the Stream to the last element.
public Optional<CustomObject> getLastObject(List<CustomObject> list) {
return list.stream().filter(Objects::nonNull).reduce((a, b) -> b);
}
I am using the LinkedList data structure serverList to store the elements. As of now, it will also insert null into the LinkedList serverList, which is not what I want. Is there another data structure I can use that will not add null elements to serverList but still maintains insertion order?
public List<String> getServerNames(ProcessData dataHolder) {
// some code
String localIP = getLocalIP(localPath, clientId);
String localAddress = getLocalAddress(localPath, clientId);
// some code
List<String> serverList = new LinkedList<String>();
serverList.add(localIP);
if (ppFlag) {
serverList.add(localAddress);
}
if (etrFlag) {
for (String remotePath : holderPath) {
String remoteIP = getRemoteIP(remotePath, clientId);
String remoteAddress = getRemoteAddress(remotePath, clientId);
serverList.add(remoteIP);
if (ppFlag) {
serverList.add(remoteAddress);
}
}
}
return serverList;
}
This method returns a List which I iterate over in a normal for loop. I would rather have an empty serverList if everything is null, instead of having four null values in my list. In my code above, getLocalIP, getLocalAddress, getRemoteIP and getRemoteAddress can return null, and then a null element is added to the linked list. I know I can add an if check, but then I would need to add it four times just before adding to the linked list. Is there any better data structure which I can use here?
One constraint I have is: this library is used under very heavy load, so this code has to be fast since it will be called many times.
I am using LinkedList data structure serverList to store the elements in it.
That's most probably wrong, given that you're aiming at speed. An ArrayList is much faster unless you're using it as a Queue or the like.
I know I can add a if check but then I need to add if check four time just before adding to Linked List. Is there any better data structure which I can use here?
A collection silently ignoring nulls would be a bad idea. It may be useful sometimes and very surprising at other times. Moreover, it'd violate the List.add contract. So you won't find it in any serious library and you shouldn't implement it.
Just write a method
static <E> void addIfNotNullTo(Collection<E> collection, E e) {
if (e != null) {
collection.add(e);
}
}
and use it. It won't make your code really shorter, but it'll make it clearer.
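For illustration, a sketch of how the helper might look inside getServerNames() (variable names taken from the question):
List<String> serverList = new ArrayList<>();
addIfNotNullTo(serverList, localIP);
if (ppFlag) {
    addIfNotNullTo(serverList, localAddress);
}
// ... and the same pattern for remoteIP and remoteAddress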
One constraint I have is - This library is use under very heavy load so this code has to be fast since it will be called multiple times.
Note that any IO is many orders of magnitude slower than simple list operations.
Use Apache Commons Collection:
ListUtils.predicatedList(new ArrayList(), PredicateUtils.notNullPredicate());
Adding null to this list throws IllegalArgumentException. Furthermore you can back it by any List implementation you like and if necessary you can add more Predicates to be checked.
Same exists for Collections in general.
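For a general Collection rather than a List, the analogous call (to the best of my recollection of the Commons Collections API) is CollectionUtils.predicatedCollection:
Collection servers = CollectionUtils.predicatedCollection(new ArrayList(), PredicateUtils.notNullPredicate());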
There are data structures that do not allow null elements, such as ArrayDeque, but these will throw an exception rather than silently ignore a null element, so you'd have to check for null before insertion anyway.
If you're dead set against adding null checks before insertion, you could instead iterate over the list and remove null elements before you return it.
The simplest way would be to just override LinkedList#add() in your getServerNames() method.
List<String> serverList = new LinkedList<String>() {
public boolean add(String item) {
if (item != null) {
super.add(item);
return true;
} else
return false;
}
};
serverList.add(null);
serverList.add("NotNULL");
System.out.println(serverList.size()); // prints 1
If you then see yourself using this at several places, you can probably turn it into a class.
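For instance, the anonymous class above could become a small named class (a sketch; the class name is made up here for illustration):
class NonNullLinkedList<E> extends LinkedList<E> {
    @Override
    public boolean add(E item) {
        // silently ignore nulls, exactly as the anonymous version above does
        return item != null && super.add(item);
    }
}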
You can use a plain Java LinkedHashSet to store your values; unlike HashSet, it preserves insertion order, which your list requires. The null value may be added multiple times, but it will only ever appear once in the set. You can remove null from the set and then convert it to an ArrayList before returning.
Set<String> serverSet = new LinkedHashSet<String>();
serverSet.add(localIP);
if (ppFlag) {
serverSet.add(localAddress);
}
if (etrFlag) {
for (String remotePath : holderPath) {
String remoteIP = getRemoteIP(remotePath, clientId);
String remoteAddress = getRemoteAddress(remotePath, clientId);
serverSet.add(remoteIP);
if (ppFlag) {
serverSet.add(remoteAddress);
}
}
}
serverSet.remove(null); // remove null from your set - no exception if null not present
List<String> serverList = new ArrayList<String>(serverSet);
return serverList;
Since you use Guava (it's tagged), I have this alternative if you have the luxury of being able to return a Collection instead of a List.
Why Collection ? Because List forces you to either return true or throw an exception. Collection allows you to return false if you didn't add anything to it.
class MyVeryOwnList<T> extends ForwardingCollection<T> { // Note: not ForwardingList
private final List<T> delegate = new LinkedList<>(); // Keep a linked list
@Override protected Collection<T> delegate() { return delegate; }
@Override public boolean add(T element) {
if (element == null) {
return false;
} else {
return delegate.add(element);
}
}
@Override public boolean addAll(Collection<? extends T> elements) {
return standardAddAll(elements);
}
}
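A quick usage sketch (names from the question; add() simply reports false for nulls instead of throwing):
Collection<String> serverList = new MyVeryOwnList<>();
serverList.add(null);        // returns false, nothing stored
serverList.add("10.0.0.1");  // returns true, element kept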
I have two collections of the same object, Collection<Foo> oldSet and Collection<Foo> newSet. The required logic is as follow:
if foo is in(*) oldSet but not newSet, call doRemove(foo)
else if foo is not in oldSet but in newSet, call doAdd(foo)
else if foo is in both collections but modified, call doUpdate(oldFoo, newFoo)
else if !foo.activated && foo.startDate >= now, call doStart(foo)
else if foo.activated && foo.endDate <= now, call doEnd(foo)
(*) "in" means the unique identifier matches, not necessarily the content.
The current (legacy) code does many comparisons to figure out removeSet, addSet, updateSet, startSet and endSet, and then loop to act on each item.
The code is quite messy (partly because I have left out some spaghetti logic already) and I am trying to refactor it. Some more background info:
As far as I know, the oldSet and newSet are actually backed by ArrayList
Each set contains less than 100 items, most likely max out at 20
This code is called frequently (measured in millions/day), although the sets seldom differ
My questions:
If I convert oldSet and newSet into HashMap<Foo> (order is not of concern here), with the IDs as keys, would it make the code easier to read and easier to compare? How much time and memory performance is lost in the conversion?
Would iterating the two sets and performing the appropriate operation be more efficient and concise?
Apache's commons.collections library has a CollectionUtils class that provides easy-to-use methods for Collection manipulation/checking, such as intersection, difference, and union.
The org.apache.commons.collections.CollectionUtils API docs are here.
You can use Java 8 streams, for example
set1.stream().filter(s -> set2.contains(s)).collect(Collectors.toSet());
or Sets class from Guava:
Set<String> intersection = Sets.intersection(set1, set2);
Set<String> difference = Sets.difference(set1, set2);
Set<String> symmetricDifference = Sets.symmetricDifference(set1, set2);
Set<String> union = Sets.union(set1, set2);
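Applied to the question's first two cases (assuming Foo.equals()/hashCode() are based on the unique identifier, and copying the inputs into Sets first), a sketch:
Set<Foo> oldFoos = new HashSet<>(oldSet);
Set<Foo> newFoos = new HashSet<>(newSet);
Sets.difference(oldFoos, newFoos).forEach(this::doRemove); // in oldSet but not newSet
Sets.difference(newFoos, oldFoos).forEach(this::doAdd);    // in newSet but not oldSet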
I have created an approximation of what I think you are looking for, just using the Collections Framework in Java. Frankly, I think it is probably overkill, as @Mike Deck points out. For such a small set of items to compare and process I think arrays would be a better choice from a procedural standpoint, but here is my pseudo-coded (because I'm lazy) solution. I assume that the Foo class is comparable based on its unique id and not all of the data in its contents:
Collection<Foo> oldSet = ...;
Collection<Foo> newSet = ...;
private Collection difference(Collection a, Collection b) {
Collection result = a.clone();
result.removeAll(b)
return result;
}
private Collection intersection(Collection a, Collection b) {
Collection result = a.clone();
result.retainAll(b)
return result;
}
public doWork() {
// if foo is in(*) oldSet but not newSet, call doRemove(foo)
Collection removed = difference(oldSet, newSet);
if (!removed.isEmpty()) {
loop removed {
Foo foo = removedIter.next();
doRemove(foo);
}
}
//else if foo is not in oldSet but in newSet, call doAdd(foo)
Collection added = difference(newSet, oldSet);
if (!added.isEmpty()) {
loop added {
Foo foo = addedIter.next();
doAdd(foo);
}
}
// else if foo is in both collections but modified, call doUpdate(oldFoo, newFoo)
Collection matched = intersection(oldSet, newSet);
Comparator comp = new Comparator() {
int compare(Object o1, Object o2) {
Foo f1, f2;
if (o1 instanceof Foo) f1 = (Foo)o1;
if (o2 instanceof Foo) f2 = (Foo)o2;
return f1.activated == f2.activated ? f1.startdate.compareTo(f2.startdate) == 0 ? ... : f1.startdate.compareTo(f2.startdate) : f1.activated ? 1 : 0;
}
boolean equals(Object o) {
// equal to this Comparator..not used
}
}
loop matched {
Foo foo = matchedIter.next();
Foo oldFoo = oldSet.get(foo);
Foo newFoo = newSet.get(foo);
if (comp.compareTo(oldFoo, newFoo ) != 0) {
doUpdate(oldFoo, newFoo);
} else {
//else if !foo.activated && foo.startDate >= now, call doStart(foo)
if (!foo.activated && foo.startDate >= now) doStart(foo);
// else if foo.activated && foo.endDate <= now, call doEnd(foo)
if (foo.activated && foo.endDate <= now) doEnd(foo);
}
}
}
As far as your questions:
If I convert oldSet and newSet into HashMap (order is not of concern here), with the IDs as keys, would it make the code easier to read and easier to compare? How much time and memory performance is lost in the conversion?
I think that you would probably make the code more readable by using a Map BUT...you would probably use more memory and time during the conversion.
Would iterating the two sets and performing the appropriate operation be more efficient and concise?
Yes, this would be the best of both worlds, especially if you followed @Mike Sharek's advice of rolling your own List with the specialized methods, or followed something like the Visitor design pattern to run through your collection and process each item.
I think the easiest way to do that is by using the Apache Commons Collections API - CollectionUtils.subtract(list1, list2) - as long as the lists are of the same type.
I'd move to lists and solve it this way:
Sort both lists by id ascending, using a custom Comparator if the objects in the lists aren't Comparable
Iterate over the elements of both lists as in the merge phase of the merge sort algorithm, but instead of merging the lists, you check your logic.
The code would be more or less like this:
/* Main method */
private void execute(Collection<Foo> oldSet, Collection<Foo> newSet) {
List<Foo> oldList = asSortedList(oldSet);
List<Foo> newList = asSortedList(newSet);
int oldIndex = 0;
int newIndex = 0;
// Iterate over both collections but not always in the same pace
while( oldIndex < oldList.size()
&& newIndex < newList.size()) {
Foo oldObject = oldList.get(oldIndex);
Foo newObject = newList.get(newIndex);
// Your logic here
if(oldObject.getId() < newObject.getId()) {
doRemove(oldObject);
oldIndex++;
} else if( oldObject.getId() > newObject.getId() ) {
doAdd(newObject);
newIndex++;
} else if( oldObject.getId() == newObject.getId()
&& isModified(oldObject, newObject) ) {
doUpdate(oldObject, newObject);
oldIndex++;
newIndex++;
} else {
...
}
}// while
// Check if there are any objects left in *oldList* or *newList*
for(; oldIndex < oldList.size(); oldIndex++ ) {
doRemove( oldList.get(oldIndex) );
}// for( oldIndex )
for(; newIndex < newList.size(); newIndex++ ) {
doAdd( newList.get(newIndex) );
}// for( newIndex )
}// execute( oldSet, newSet )
/** Create sorted list from collection
If you actually perform any actions on the input collections then you should
always return new instance of list to keep algorithm simple.
*/
private List<Foo> asSortedList(Collection<Foo> data) {
List<Foo> resultList;
if(data instanceof List) {
resultList = (List<Foo>)data;
} else {
resultList = new ArrayList<Foo>(data);
}
Collections.sort(resultList);
return resultList;
}
public static boolean doCollectionsContainSameElements(
Collection<Integer> c1, Collection<Integer> c2){
if (c1 == null || c2 == null) {
return false;
}
else if (c1.size() != c2.size()) {
return false;
} else {
return c1.containsAll(c2) && c2.containsAll(c1);
}
}
For a set that small it is generally not worth it to convert from an array to a HashMap/set. In fact, you're probably best off keeping them in an array, then sorting them by key and iterating over both lists simultaneously to do the comparison.
For comparing a list or set we can use Arrays.equals(object[], object[]). It checks the values only, element by element, so the iteration order matters. To get the Object[] we can use the Collection.toArray() method.
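Keep in mind that Arrays.equals() compares positionally, so two collections with the same elements in a different order are not reported as equal; a tiny illustration:
List<Integer> a = Arrays.asList(1, 2, 3);
List<Integer> b = Arrays.asList(3, 2, 1);
System.out.println(Arrays.equals(a.toArray(), b.toArray()));                  // false: same elements, different order
System.out.println(Arrays.equals(a.toArray(), new ArrayList<>(a).toArray())); // true: same order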