I want to filter a java.util.Collection based on a predicate.
Java 8 (2014) solves this problem using streams and lambdas in one line of code:
List<Person> beerDrinkers = persons.stream()
.filter(p -> p.getAge() > 16).collect(Collectors.toList());
Here's a tutorial.
Use Collection#removeIf to modify the collection in place. (Note: in this case, the predicate removes the objects that satisfy it):
persons.removeIf(p -> p.getAge() <= 16);
lambdaj allows filtering collections without writing loops or inner classes:
List<Person> beerDrinkers = select(persons, having(on(Person.class).getAge(),
greaterThan(16)));
Can you imagine something more readable?
Disclaimer: I am a contributor to lambdaj.
Assuming that you are using Java 1.5, and that you cannot add Google Collections, I would do something very similar to what the Google guys did. This is a slight variation on Jon's comments.
First add this interface to your codebase.
public interface IPredicate<T> { boolean apply(T type); }
Its implementers can answer whether a certain predicate holds for a certain type. For example, if T were User and AuthorizedUserPredicate implements IPredicate<User>, then AuthorizedUserPredicate#apply returns whether the passed-in User is authorized.
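For illustration, such an implementation might look like this (a sketch; it assumes the isAuthorized() method shown in the usage further down):
public class AuthorizedUserPredicate implements IPredicate<User> {
    public boolean apply(User user) {
        // the predicate holds when the given user is authorized
        return user.isAuthorized();
    }
}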
Then in some utility class, you could say
public static <T> Collection<T> filter(Collection<T> target, IPredicate<T> predicate) {
Collection<T> result = new ArrayList<T>();
for (T element: target) {
if (predicate.apply(element)) {
result.add(element);
}
}
return result;
}
So, assuming you have the above, its use might be:
IPredicate<User> isAuthorized = new IPredicate<User>() {
public boolean apply(User user) {
// binds a boolean method in User to a reference
return user.isAuthorized();
}
};
// allUsers is a Collection<User>
Collection<User> authorizedUsers = filter(allUsers, isAuthorized);
If performance on the linear check is of concern, then I might want to have a domain object that has the target collection. The domain object that has the target collection would have filtering logic for the methods that initialize, add and set the target collection.
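A minimal sketch of that idea (the UserRepository name and its methods are hypothetical, not part of the answer above):
import java.util.ArrayList;
import java.util.Collection;

public class UserRepository {
    private final Collection<User> allUsers = new ArrayList<User>();
    private final Collection<User> authorizedUsers = new ArrayList<User>();

    public void add(User user) {
        allUsers.add(user);
        // filter at insertion time, so readers never pay for a linear scan
        if (user.isAuthorized()) {
            authorizedUsers.add(user);
        }
    }

    public Collection<User> getAuthorizedUsers() {
        return authorizedUsers;
    }
}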
UPDATE:
In the utility class (let's say Predicate), I have added a select method with an option for default value when the predicate doesn't return the expected value, and also a static property for params to be used inside the new IPredicate.
public class Predicate {
public static Object predicateParams;
public static <T> Collection<T> filter(Collection<T> target, IPredicate<T> predicate) {
Collection<T> result = new ArrayList<T>();
for (T element : target) {
if (predicate.apply(element)) {
result.add(element);
}
}
return result;
}
public static <T> T select(Collection<T> target, IPredicate<T> predicate) {
T result = null;
for (T element : target) {
if (!predicate.apply(element))
continue;
result = element;
break;
}
return result;
}
public static <T> T select(Collection<T> target, IPredicate<T> predicate, T defaultValue) {
T result = defaultValue;
for (T element : target) {
if (!predicate.apply(element))
continue;
result = element;
break;
}
return result;
}
}
The following example looks for missing objects between collections:
List<MyTypeA> missingObjects = (List<MyTypeA>) Predicate.filter(myCollectionOfA,
new IPredicate<MyTypeA>() {
public boolean apply(MyTypeA objectOfA) {
Predicate.predicateParams = objectOfA.getName();
return Predicate.select(myCollectionB, new IPredicate<MyTypeB>() {
public boolean apply(MyTypeB objectOfB) {
return objectOfB.getName().equals(Predicate.predicateParams.toString());
}
}) == null;
}
});
The following example looks for an instance in a collection and returns the first element of the collection as the default value when the instance is not found:
MyType myObject = Predicate.select(collectionOfMyType, new IPredicate<MyType>() {
public boolean apply(MyType objectOfMyType) {
return objectOfMyType.isDefault();
}}, collectionOfMyType.get(0));
UPDATE (after Java 8 release):
It's been several years since I (Alan) first posted this answer, and I still cannot believe I am collecting SO points for it. At any rate, now that Java 8 has introduced closures to the language, my answer would be considerably different, and simpler. With Java 8, there is no need for a distinct static utility class. So if you want to find the first element that matches your predicate:
final UserService userService = ... // perhaps injected IoC
final Optional<UserModel> userOption = userCollection.stream().filter(u -> {
boolean isAuthorized = userService.isAuthorized(u);
return isAuthorized;
}).findFirst();
The JDK 8 API for optionals has the ability to get(), isPresent(), orElse(defaultUser), orElseGet(userSupplier) and orElseThrow(exceptionSupplier), as well as other 'monadic' functions such as map, flatMap and filter.
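For example, continuing with the userOption result above (defaultUser and the getName() accessor are placeholders, not defined in this answer):
final UserModel user = userOption.orElse(defaultUser);        // fall back to a default
final String name = userOption.map(UserModel::getName)        // transform the value if present
        .orElseThrow(IllegalStateException::new);              // or fail if no user matched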
If you want to simply collect all the users which match the predicate, then use the Collectors to terminate the stream in the desired collection.
final UserService userService = ... // perhaps injected IoC
final List<UserModel> authorizedUsers = userCollection.stream().filter(u -> {
boolean isAuthorized = userService.isAuthorized(u);
return isAuthorized;
}).collect(Collectors.toList());
See here for more examples on how Java 8 streams work.
Use CollectionUtils.filter(Collection,Predicate), from Apache Commons.
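For example (a sketch using Commons Collections 4 and the Person example from the question; note that filter modifies the collection in place, keeping only the elements the predicate accepts):
import org.apache.commons.collections4.CollectionUtils;
import org.apache.commons.collections4.Predicate;

CollectionUtils.filter(persons, new Predicate<Person>() {
    public boolean evaluate(Person p) {
        return p.getAge() > 16;   // elements failing this test are removed
    }
});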
"Best" way is too wide a request. Is it "shortest"? "Fastest"? "Readable"?
Filter in place or into another collection?
Simplest (but not most readable) way is to iterate it and use Iterator.remove() method:
Iterator<Foo> it = col.iterator();
while( it.hasNext() ) {
Foo foo = it.next();
if( !condition(foo) ) it.remove();
}
Now, to make it more readable, you can wrap it into a utility method. Then invent an IPredicate interface, create an anonymous implementation of that interface and do something like:
CollectionUtils.filterInPlace(col,
new IPredicate<Foo>(){
public boolean keepIt(Foo foo) {
return foo.isBar();
}
});
where filterInPlace() iterates the collection and calls IPredicate.keepIt() to learn whether the instance should be kept in the collection.
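A possible filterInPlace() implementation (just a sketch of the utility method described above, assuming an IPredicate with the keepIt(T) method used in the example):
public static <T> void filterInPlace(Collection<T> collection, IPredicate<T> predicate) {
    Iterator<T> it = collection.iterator();
    while (it.hasNext()) {
        // remove every element the predicate does not want to keep
        if (!predicate.keepIt(it.next())) {
            it.remove();
        }
    }
}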
I don't really see a justification for bringing in a third-party library just for this task.
Consider Google Collections for an updated Collections framework that supports generics.
UPDATE: The Google Collections library is now deprecated. You should use the latest release of Guava instead. It still has all the same extensions to the collections framework, including a mechanism for filtering based on a predicate.
Wait for Java 8:
List<Person> olderThan30 =
//Create a Stream from the personList
personList.stream().
//filter the element to select only those with age >= 30
filter(p -> p.age >= 30).
//put those filtered elements into a new List.
collect(Collectors.toList());
Since the early release of Java 8, you could try something like:
Collection<T> collection = ...;
Stream<T> stream = collection.stream().filter(...);
For example, if you had a list of integers and you wanted to filter the numbers that are > 10 and then print out those numbers to the console, you could do something like:
List<Integer> numbers = Arrays.asList(12, 74, 5, 8, 16);
numbers.stream().filter(n -> n > 10).forEach(System.out::println);
I'll throw RxJava into the ring, which is also available on Android. RxJava might not always be the best option, but it will give you more flexibility if you wish to add more transformations on your collection or handle errors while filtering.
Observable.from(Arrays.asList(1, 2, 3, 4, 5))
.filter(new Func1<Integer, Boolean>() {
public Boolean call(Integer i) {
return i % 2 != 0;
}
})
.subscribe(new Action1<Integer>() {
public void call(Integer i) {
System.out.println(i);
}
});
Output:
1
3
5
More details on RxJava's filter can be found here.
The setup:
public interface Predicate<T> {
public boolean filter(T t);
}
public static <T> void filterCollection(Collection<T> col, Predicate<T> predicate) {
for (Iterator<T> i = col.iterator(); i.hasNext();) {
T obj = i.next();
if (predicate.filter(obj)) {
i.remove();
}
}
}
The usage:
List<MyObject> myList = ...;
filterCollection(myList, new Predicate<MyObject>() {
public boolean filter(MyObject obj) {
return obj.shouldFilter();
}
});
How about some plain and straightforward Java?
List<Customer> list ...;
List<Customer> newList = new ArrayList<>();
for (Customer c : list){
if (c.getName().equals("dd")) newList.add(c);
}
Simple, readable and easy (and works in Android!)
But if you're using Java 8 you can do it in a sweet one line:
List<Customer> newList = list.stream().filter(c -> c.getName().equals("dd")).collect(toList());
Note that toList() is statically imported
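For reference, the static import in question is:
import static java.util.stream.Collectors.toList;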
Are you sure you want to filter the Collection itself, rather than an iterator?
see org.apache.commons.collections.iterators.FilterIterator
or, using version 4 of Apache Commons, org.apache.commons.collections4.iterators.FilterIterator
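A rough sketch with the Commons Collections 4 FilterIterator, reusing the Person example from the question (the iterator only yields elements the predicate accepts):
import org.apache.commons.collections4.Predicate;
import org.apache.commons.collections4.iterators.FilterIterator;

Iterator<Person> beerDrinkers = new FilterIterator<Person>(persons.iterator(), new Predicate<Person>() {
    public boolean evaluate(Person p) {
        return p.getAge() > 16;
    }
});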
Since Java 9, Collectors.filtering is available:
public static <T, A, R>
Collector<T, ?, R> filtering(Predicate<? super T> predicate,
Collector<? super T, A, R> downstream)
Thus filtering should be:
collection.stream().collect(Collectors.filtering(predicate, collector))
Example:
List<Integer> oddNumbers = List.of(1, 19, 15, 10, -10).stream()
.collect(Collectors.filtering(i -> i % 2 == 1, Collectors.toList()));
Let’s look at how to filter a built-in JDK List and a MutableList using Eclipse Collections.
List<Integer> jdkList = Arrays.asList(1, 2, 3, 4, 5);
MutableList<Integer> ecList = Lists.mutable.with(1, 2, 3, 4, 5);
If you wanted to filter the numbers less than 3, you would expect the following outputs.
List<Integer> selected = Lists.mutable.with(1, 2);
List<Integer> rejected = Lists.mutable.with(3, 4, 5);
Here’s how you can filter using a Java 8 lambda as the Predicate.
Assert.assertEquals(selected, Iterate.select(jdkList, each -> each < 3));
Assert.assertEquals(rejected, Iterate.reject(jdkList, each -> each < 3));
Assert.assertEquals(selected, ecList.select(each -> each < 3));
Assert.assertEquals(rejected, ecList.reject(each -> each < 3));
Here’s how you can filter using an anonymous inner class as the Predicate.
Predicate<Integer> lessThan3 = new Predicate<Integer>()
{
public boolean accept(Integer each)
{
return each < 3;
}
};
Assert.assertEquals(selected, Iterate.select(jdkList, lessThan3));
Assert.assertEquals(selected, ecList.select(lessThan3));
Here are some alternatives to filtering JDK lists and Eclipse Collections MutableLists using the Predicates factory.
Assert.assertEquals(selected, Iterate.select(jdkList, Predicates.lessThan(3)));
Assert.assertEquals(selected, ecList.select(Predicates.lessThan(3)));
Here is a version that doesn't allocate an object for the predicate, by using the Predicates2 factory instead with the selectWith method that takes a Predicate2.
Assert.assertEquals(
selected, ecList.selectWith(Predicates2.<Integer>lessThan(), 3));
Sometimes you want to filter on a negative condition. There is a special method in Eclipse Collections for that called reject.
Assert.assertEquals(rejected, Iterate.reject(jdkList, lessThan3));
Assert.assertEquals(rejected, ecList.reject(lessThan3));
The method partition will return two collections, containing the elements selected by and rejected by the Predicate.
PartitionIterable<Integer> jdkPartitioned = Iterate.partition(jdkList, lessThan3);
Assert.assertEquals(selected, jdkPartitioned.getSelected());
Assert.assertEquals(rejected, jdkPartitioned.getRejected());
PartitionList<Integer> ecPartitioned = ecList.partition(lessThan3);
Assert.assertEquals(selected, ecPartitioned.getSelected());
Assert.assertEquals(rejected, ecPartitioned.getRejected());
Note: I am a committer for Eclipse Collections.
With the ForEach DSL you may write
import static ch.akuhn.util.query.Query.select;
import static ch.akuhn.util.query.Query.$result;
import ch.akuhn.util.query.Select;
Collection<String> collection = ...
for (Select<String> each : select(collection)) {
each.yield = each.value.length() > 3;
}
Collection<String> result = $result();
Given a collection of [The, quick, brown, fox, jumps, over, the, lazy, dog] this results in [quick, brown, jumps, over, lazy], i.e., all strings longer than three characters.
All iteration styles supported by the ForEach DSL are
AllSatisfy
AnySatisfy
Collect
Count
CutPieces
Detect
GroupedBy
IndexOf
InjectInto
Reject
Select
For more details, please refer to https://www.iam.unibe.ch/scg/svn_repos/Sources/ForEach
The Collections2.filter(Collection,Predicate) method in Google's Guava library does just what you're looking for.
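For example, with the Person example from the question (note that Collections2.filter returns a live filtered view of the underlying collection, not a copy):
import com.google.common.base.Predicate;
import com.google.common.collect.Collections2;

Collection<Person> beerDrinkers = Collections2.filter(persons, new Predicate<Person>() {
    public boolean apply(Person p) {
        return p.getAge() > 16;
    }
});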
This, combined with the lack of real closures, is my biggest gripe for Java.
Honestly, most of the methods mentioned above are pretty easy to read and REALLY efficient; however, after spending time with .NET, Erlang, etc., list comprehension integrated at the language level makes everything so much cleaner. Without additions at the language level, Java just can't be as clean as many other languages in this area.
If performance is a huge concern, Google collections is the way to go (or write your own simple predicate utility). Lambdaj syntax is more readable for some people, but it is not quite as efficient.
And then there is a library I wrote. I will ignore any questions in regard to its efficiency (yes, it's that bad)... Yes, I know it's clearly reflection-based, and no, I don't actually use it, but it does work:
LinkedList<Person> list = ......
LinkedList<Person> filtered =
Query.from(list).where(Condition.ensure("age", Op.GTE, 21));
OR
LinkedList<Person> list = ....
LinkedList<Person> filtered = Query.from(list).where("x => x.age >= 21");
In Java 8, you can use the filter method directly:
List<String> lines = Arrays.asList("java", "pramod", "example");
List<String> result = lines.stream()
.filter(line -> !"pramod".equals(line))
.collect(Collectors.toList());
result.forEach(System.out::println);
JFilter http://code.google.com/p/jfilter/ is best suited for your requirement.
JFilter is a simple and high-performance open source library to query collections of Java beans.
Key features
Support of collection (java.util.Collection, java.util.Map and Array) properties.
Support of collection inside collection of any depth.
Support of inner queries.
Support of parameterized queries.
Can filter 1 million records in a few hundred milliseconds.
The filter (query) is given in a simple JSON format, similar to MongoDB queries. Here are some examples.
{ "id":{"$le":"10"}
where object id property is less than equals to 10.
{ "id": {"$in":["0", "100"]}}
where the object's id property is 0 or 100.
{"lineItems":{"lineAmount":"1"}}
where the lineItems collection property (of a parameterized type) has lineAmount equal to 1.
{ "$and":[{"id": "0"}, {"billingAddress":{"city":"DEL"}}]}
where the id property is 0 and the billingAddress.city property is DEL.
{"lineItems":{"taxes":{ "key":{"code":"GST"}, "value":{"$gt": "1.01"}}}}
where the lineItems collection property has a taxes map property whose key has code equal to GST and whose value is greater than 1.01.
{'$or':[{'code':'10'},{'skus': {'$and':[{'price':{'$in':['20', '40']}}, {'code':'RedApple'}]}}]}
Selects all products where the product code is 10, or the SKU price is in (20, 40) and the SKU code is "RedApple".
I wrote an extended Iterable class that supports applying functional algorithms without copying the collection content.
Usage:
List<Integer> myList = new ArrayList<>(Arrays.asList(1, 2, 3, 4, 5));
Iterable<Integer> filtered = Iterable.wrap(myList).select(new Predicate1<Integer>()
{
public Boolean call(Integer n) throws FunctionalException
{
return n % 2 == 0;
}
});
for( int n : filtered )
{
System.out.println(n);
}
The code above will actually execute
for( int n : myList )
{
if( n % 2 == 0 )
{
System.out.println(n);
}
}
Use Collection Query Engine (CQEngine). It is by far the fastest way to do this.
See also: How do you query object collections in Java (Criteria/SQL-like)?
Using Java 8, specifically lambda expressions, you can do it simply as in the example below:
myProducts.stream().filter(prod -> prod.price>10).collect(Collectors.toList())
where, for each product in the myProducts collection, if prod.price > 10, the product is added to the new filtered list.
Some really great answers here. Me, I'd like to keep things as simple and readable as possible:
public abstract class AbstractFilter<T> {
/**
* Method that returns whether an item is to be excluded or not.
* @param item an item from the given collection.
* @return true if this item is to be removed from the collection, false if it is to be kept.
*/
protected abstract boolean excludeItem(T item);
public void filter(Collection<T> collection) {
if (CollectionUtils.isNotEmpty(collection)) {
Iterator<T> iterator = collection.iterator();
while (iterator.hasNext()) {
if (excludeItem(iterator.next())) {
iterator.remove();
}
}
}
}
}
The simple pre-Java8 solution:
ArrayList<Item> filtered = new ArrayList<Item>();
for (Item item : items) if (condition(item)) filtered.add(item);
Unfortunately this solution isn't fully generic, outputting a list rather than the type of the given collection. Also, bringing in libraries or writing functions that wrap this code seems like overkill to me unless the condition is complex, but then you can write a function for the condition.
https://code.google.com/p/joquery/
Supports different possibilities,
Given collection,
Collection<Dto> testList = new ArrayList<>();
of type,
class Dto
{
private int id;
private String text;
public int getId()
{
return id;
}
public String getText()
{
return text;
}
}
Filter
Java 7
Filter<Dto> query = CQ.<Dto>filter(testList)
.where()
.property("id").eq().value(1);
Collection<Dto> filtered = query.list();
Java 8
Filter<Dto> query = CQ.<Dto>filter(testList)
.where()
.property(Dto::getId)
.eq().value(1);
Collection<Dto> filtered = query.list();
Also,
Filter<Dto> query = CQ.<Dto>filter()
.from(testList)
.where()
.property(Dto::getId).between().value(1).value(2)
.and()
.property(Dto::getText).in().value(new String[]{"a","b"});
Sorting (also available for the Java 7)
Filter<Dto> query = CQ.<Dto>filter(testList)
.orderBy()
.property(Dto::getId)
.property(Dto::getText);
Collection<Dto> sorted = query.list();
Grouping (also available for the Java 7)
GroupQuery<Integer,Dto> query = CQ.<Dto,Dto>query(testList)
.group()
.groupBy(Dto::getId);
Collection<Grouping<Integer,Dto>> grouped = query.list();
Joins (also available for the Java 7)
Given,
class LeftDto
{
private int id;
private String text;
public int getId()
{
return id;
}
public String getText()
{
return text;
}
}
class RightDto
{
private int id;
private int leftId;
private String text;
public int getId()
{
return id;
}
public int getLeftId()
{
return leftId;
}
public String getText()
{
return text;
}
}
class JoinedDto
{
private int leftId;
private int rightId;
private String text;
public JoinedDto(int leftId,int rightId,String text)
{
this.leftId = leftId;
this.rightId = rightId;
this.text = text;
}
public int getLeftId()
{
return leftId;
}
public int getRightId()
{
return rightId;
}
public String getText()
{
return text;
}
}
Collection<LeftDto> leftList = new ArrayList<>();
Collection<RightDto> rightList = new ArrayList<>();
Can be Joined like,
Collection<JoinedDto> results = CQ.<LeftDto, LeftDto>query().from(leftList)
.<RightDto, JoinedDto>innerJoin(CQ.<RightDto, RightDto>query().from(rightList))
.on(LeftDto::getId, RightDto::getLeftId)
.transformDirect(selection -> new JoinedDto(selection.getLeft().getId()
, selection.getRight().getId()
, selection.getLeft().getText())
)
.list();
Expressions
Filter<Dto> query = CQ.<Dto>filter()
.from(testList)
.where()
.exec(s -> s.getId() + 1).eq().value(2);
My answer builds on that from Kevin Wong, here as a one-liner using CollectionUtils from Spring and a Java 8 lambda expression.
CollectionUtils.filter(list, p -> ((Person) p).getAge() > 16);
This is as concise and readable as any alternative I have seen (without using aspect-based libraries).
Spring's CollectionUtils is available from Spring version 4.0.2.RELEASE, and remember that you need JDK 1.8 and language level 8+.
I needed to filter a list depending on the values already present in the list, for example removing all subsequent values that are less than the current value ({2 5 3 4 7 5} -> {2 5 7}), or removing all duplicates ({3 5 4 2 3 5 6} -> {3 5 4 2 6}).
public class Filter {
public static <T> void List(List<T> list, Chooser<T> chooser) {
List<Integer> toBeRemoved = new ArrayList<>();
leftloop:
for (int right = 1; right < list.size(); ++right) {
for (int left = 0; left < right; ++left) {
if (toBeRemoved.contains(left)) {
continue;
}
Keep keep = chooser.choose(list.get(left), list.get(right));
switch (keep) {
case LEFT:
toBeRemoved.add(right);
continue leftloop;
case RIGHT:
toBeRemoved.add(left);
break;
case NONE:
toBeRemoved.add(left);
toBeRemoved.add(right);
continue leftloop;
}
}
}
Collections.sort(toBeRemoved, new Comparator<Integer>() {
@Override
public int compare(Integer o1, Integer o2) {
return o2 - o1;
}
});
for (int i : toBeRemoved) {
if (i >= 0 && i < list.size()) {
list.remove(i);
}
}
}
public static <T> void List(List<T> list, Keeper<T> keeper) {
Iterator<T> iterator = list.iterator();
while (iterator.hasNext()) {
if (!keeper.keep(iterator.next())) {
iterator.remove();
}
}
}
public interface Keeper<E> {
boolean keep(E obj);
}
public interface Chooser<E> {
Keep choose(E left, E right);
}
public enum Keep {
LEFT, RIGHT, BOTH, NONE;
}
}
It would be used like this:
List<String> names = new ArrayList<>();
names.add("Anders");
names.add("Stefan");
names.add("Anders");
Filter.List(names, new Filter.Chooser<String>() {
@Override
public Filter.Keep choose(String left, String right) {
return left.equals(right) ? Filter.Keep.LEFT : Filter.Keep.BOTH;
}
});
In my case, I was looking for a list with a particular null field excluded. This could be done with a for loop, filling a temporary list with the objects that have a non-null address, but thanks to Java 8 Streams:
List<Person> personsList = persons.stream()
.filter(p -> p.getAddress() != null).collect(Collectors.toList());
With Guava:
Collection<Integer> collection = Lists.newArrayList(1, 2, 3, 4, 5);
Iterators.removeIf(collection.iterator(), new Predicate<Integer>() {
@Override
public boolean apply(Integer i) {
return i % 2 == 0;
}
});
System.out.println(collection); // Prints 1, 3, 5
A more lightweight alternative to Java collection streams is the Ocl.java library, which uses vanilla collections and lambdas: https://github.com/eclipse/agileuml/blob/master/Ocl.java
For example, a simple filter and sum on an ArrayList words
could be:
ArrayList<Word> sel = Ocl.selectSequence(words,
w -> w.pos.equals("NN"));
int total = Ocl.sumint(Ocl.collectSequence(sel,
w -> w.text.length()));
Where Word has String pos; String text; attributes. Efficiency seems similar to the streams option, e.g., 10000 words are processed in about 50ms in both versions.
There are equivalent OCL libraries for Python, Swift, etc. Basically, Java collection streams have re-invented the OCL operations ->select, ->collect, etc., which have existed in OCL since 1998.
This is a homework lab for school. I am trying to reverse a LinkedList, and check if it is a palindrome (the same backwards and forwards). I saw similar questions online, but not many that help me with this. I have made programs that check for palindromes before, but none that check an array or list. So, first, here is my isPalindrome method:
public static <E> boolean isPalindrome(Collection<E> c) {
Collection<E> tmp = c;
System.out.println(tmp);
Collections.reverse((List<E>) c);
System.out.println(c);
if(tmp == c) { return true; } else { return false; }
}
My professor wants us to set the method up to accept all collections which is why I used Collection and cast it as a list for the reverse method, but I'm not sure if that is done correctly. I know that it does reverse the list. Here is my main method:
public static void main(String...strings) {
Integer[] arr2 = {1,3,1,1,2};
LinkedList<Integer> ll2 = new LinkedList<Integer>(Arrays.asList(arr2));
if(isPalindrome(ll2)) { System.out.println("Successful!"); }
}
The problem is, I am testing this with an array that is not a palindrome, meaning it is not the same backwards as it is forwards. I already tested it using the array {1,3,1} and it works fine because that is a palindrome. Using {1,3,1,1,2} still returns true for palindrome, though it is clearly not. Here is my output using the {1,3,1,1,2} array:
[1, 3, 1, 1, 2]
[2, 1, 1, 3, 1]
Successful!
So, it seems to be properly reversing the List, but when it compares them, it assumes they are equal? I believe there is an issue with the tmp == c and how it checks whether they are equal. I assume it just checks if it contains the same elements, but I'm not sure. I also tried tmp.equals(c), but it returned the same results. I'm just curious if there is another method that I can use, or do I have to write a method to compare tmp and c?
Thank you in advance!
Tommy
In your code, c and tmp are references to the same collection, so tmp == c will always be true. You must copy your collection into a new instance, for example: List<E> tmp = new ArrayList<>(c);
Many small points
public static <E> boolean isPalindrome(Collection<E> c) {
List<E> list = new ArrayList<>(c);
System.out.println(list);
Collections.reverse(list);
System.out.println(list);
return list.equals(new ArrayList<E>(c));
}
Reverse only works on an ordered list.
One makes a copy of the collection.
One uses equals to compare collections.
public static void main(String...strings) {
Integer[] arr2 = {1, 3, 1, 1, 2};
//List<Integer> ll2 = new LinkedList<>(Arrays.asList(arr2));
List<Integer> ll2 = Arrays.asList(arr2);
if (isPalindrome(ll2)) { System.out.println("Successful!"); }
}
You need to copy the Collection to a List / array. This has to be done, since the only ordering defined for a Collection is the one of the iterator.
Object[] asArray = c.toArray();
You can then apply the algorithm of your choice for checking whether this array is a palindrome, and thereby check whether the Collection is a palindrome.
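For instance, a simple two-pointer check over that array (a sketch; it uses Objects.equals for null-safe comparison):
for (int i = 0, j = asArray.length - 1; i < j; i++, j--) {
    if (!Objects.equals(asArray[i], asArray[j])) {
        return false;   // mismatch found, so not a palindrome
    }
}
return true;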
Alternatively, using a LinkedList, it would be more efficient to check whether the list is a palindrome without creating a new List to reverse:
public static <E> boolean isPalindrome(Collection<E> c) {
List<E> list = new LinkedList<>(c);
Iterator<E> startIterator = list.iterator();
ListIterator<E> endIterator = list.listIterator(list.size());
for (int i = list.size() / 2; i > 0; i--) {
if (!Objects.equals(startIterator.next(), endIterator.previous())) {
return false;
}
}
return true;
}
This is a follow-up to compare sets.
I have
Set<Set<Node>> NestedSet = new HashSet<Set<Node>>();
[[Node[0], Node[1], Node[2]], [Node[0], Node[2], Node[6]], [Node[3], Node[4], Node[5]] [Node[2], Node[6], Node[7]] ]
I want to merge the sets when they have two elements in common. For example, 0,1,2 and 0,2,6 have two elements in common, so merging them forms [0,1,2,6].
Again, [0,1,2,6] and [2,6,7] have 2 and 6 in common, so merging them gives [0,1,2,6,7].
The final output should be :
[ [Node[0], Node[1], Node[2], Node[6], Node[7]], [Node[3], Node[4], Node[5]] ]
I tried like this :
for (Set<Node> s1 : NestedSet ) {
Optional<Set<Node>> findFirst = result.stream().filter(p -> { HashSet<Node> temp = new HashSet<>(s1);
temp.retainAll(p);
return temp.size() == 2; }).findFirst();
if (findFirst.isPresent()){
findFirst.get().addAll(s1);
}
else {
result.add(s1);
}
}
But the result I got was :
[[Node[0], Node[1], Node[2], Node[6], Node[7]], [Node[3], Node[4], Node[5]], [Node[0], Node[2], Node[6], Node[7]]]
Any idea ? Is there any way to get the desired output?
Some considerations:
Each time you apply a merge, you have to restart the procedure and iterate over the modified collection. Because of this, the iteration order of the input set is important; if you want your code to be deterministic you may want to use collections that give guarantees over their iteration order (e.g., a LinkedHashSet (not HashSet) or a List).
Your current code has side effects as it modifies the supplied sets when merging. In general I think it helps to abstain from creating side effects whenever possible.
The following code does what you want:
static <T> List<Set<T>> mergeSets(Collection<? extends Set<T>> unmergedSets) {
final List<Set<T>> mergedSets = new ArrayList<>(unmergedSets);
List<Integer> mergeCandidate = Collections.emptyList();
do {
mergeCandidate = findMergeCandidate(mergedSets);
// apply the merge
if (!mergeCandidate.isEmpty()) {
// gather the sets to merge
final Set<T> mergedSet = Sets.union(
mergedSets.get(mergeCandidate.get(0)),
mergedSets.get(mergeCandidate.get(1)));
// removes both sets using their index, starts with the highest index
mergedSets.remove(mergeCandidate.get(0).intValue());
mergedSets.remove(mergeCandidate.get(1).intValue());
// add the mergedSet
mergedSets.add(mergedSet);
}
} while (!mergeCandidate.isEmpty());
return mergedSets;
}
// O(n^2/2)
static <T> List<Integer> findMergeCandidate(List<Set<T>> sets) {
for (int i = 0; i < sets.size(); i++) {
for (int j = i + 1; j < sets.size(); j++) {
if (Sets.intersection(sets.get(i), sets.get(j)).size() == 2) {
return Arrays.asList(j, i);
}
}
}
return Collections.emptyList();
}
For testing this method I created two helper methods:
static Set<Integer> set(int... ints) {
return new LinkedHashSet<>(Ints.asList(ints));
}
@SafeVarargs
static <T> Set<Set<T>> sets(Set<T>... sets) {
return new LinkedHashSet<>(Arrays.asList(sets));
}
These helper methods allow to write very readable tests, for example (using the numbers from the question):
public static void main(String[] args) {
// prints [[2, 6, 7, 0, 1]]
System.out.println(mergeSets(sets(set(0, 1, 2, 6), set(2, 6, 7))));
// prints [[3, 4, 5], [0, 2, 6, 1, 7]]
System.out.println(
mergeSets(sets(set(0, 1, 2), set(0, 2, 6), set(3, 4, 5), set(2, 6, 7))));
}
I'm not sure why you are getting that result, but I do see another problem with this code: It is order-dependent. For example, even if the code worked as intended, it would matter whether [Node[0], Node[1], Node[2]] is compared first to [Node[0], Node[2], Node[6]] or [Node[2], Node[6], Node[7]]. But Sets don't have a defined order, so the result is either non-deterministic or implementation-dependent, depending on how you look at it.
If you really want deterministic order-dependent operations here, you should be using List<Set<Node>>, rather than Set<Set<Node>>.
Here's a clean approach using recursion:
public static <T> Set<Set<T>> mergeIntersectingSets(Collection<? extends Set<T>> unmergedSets) {
boolean edited = false;
Set<Set<T>> mergedSets = new HashSet<>();
for (Set<T> subset1 : unmergedSets) {
boolean merged = false;
// if at least one element is contained in another subset, then merge the subsets
for (Set<T> subset2 : mergedSets) {
if (!Collections.disjoint(subset1, subset2)) {
subset2.addAll(subset1);
merged = true;
edited = true;
}
}
// otherwise, add the current subset as a new subset
if (!merged) mergedSets.add(subset1);
}
if (edited) return mergeIntersectingSets(mergedSets); // continue merging until we reach a fixpoint
else return mergedSets;
}
If I have a list with integers, is there a way to construct another list, where integers are summed if the difference to the head of the new list is below a threshold? I would like to solve this using Java 8 streams. It should work similarly to the Scan operator of RxJava.
Example: 5, 2, 2, 5, 13
Threshold: 2
Result: 5, 9, 13
Intermediate results:
5
5, 2
5, 4 (2 and 2 summed)
5, 9 (4 and 5 summed)
5, 9, 13
Sequential Stream solution may look like this:
List<Integer> result = Stream.of(5, 2, 2, 5, 13).collect(ArrayList::new, (list, n) -> {
if(!list.isEmpty() && Math.abs(list.get(list.size()-1)-n) < 2)
list.set(list.size()-1, list.get(list.size()-1)+n);
else
list.add(n);
}, (l1, l2) -> {throw new UnsupportedOperationException();});
System.out.println(result);
Though it does not look much better than the good old solution:
List<Integer> input = Arrays.asList(5, 2, 2, 5, 13);
List<Integer> list = new ArrayList<>();
for(Integer n : input) {
if(!list.isEmpty() && Math.abs(list.get(list.size()-1)-n) < 2)
list.set(list.size()-1, list.get(list.size()-1)+n);
else
list.add(n);
}
System.out.println(list);
Seems that your problem is not associative, so it cannot be easily parallelized. For example, if you split the input into two groups like this (5, 2), (2, 5, 13), you cannot say whether the first two items of the second group should be merged until the first group is processed. Thus I cannot specify the proper combiner function.
As Tagir Valeev observed, (+1) the combining function is not associative, so reduce() won't work, and it's not possible to write a combiner function for a Collector. Instead, this combining function needs to be applied left-to-right, with the previous partial result being fed into the next operation. This is called a fold-left operation, and unfortunately Java streams don't have such an operation.
(Should they? Let me know.)
It's possible to sort-of write your own fold-left operation using forEachOrdered while capturing and mutating an object to hold partial state. First, let's extract the combining function into its own method:
// extracted from Tagir Valeev's answer
void combine(List<Integer> list, int n) {
if (!list.isEmpty() && Math.abs(list.get(list.size()-1)-n) < 2)
list.set(list.size()-1, list.get(list.size()-1)+n);
else
list.add(n);
}
Then, create the initial result list and call the combining function from within forEachOrdered:
List<Integer> result = new ArrayList<>();
IntStream.of(5, 2, 2, 5, 13)
.forEachOrdered(n -> combine(result, n));
This gives the desired result of
[5, 9, 13]
In principle this can be done on a parallel stream, but performance will probably degrade to sequential given the semantics of forEachOrdered. Also note that the forEachOrdered operations are performed one at a time, so we needn't worry about thread safety of the data we're mutating.
I know that the Stream's masters "Tagir Valeev" and "Stuart Marks" already pointed out that reduce() will not work because the combining function is not associative, and I'm risking a couple of downvotes here. Anyway:
What about if we force the stream to be sequential? Wouldn't we be able then to use reduce? Isn't the associativity property only needed when using parallelism?
Stream<Integer> s = Stream.of(5, 2, 2, 5, 13);
LinkedList<Integer> result = s.sequential().reduce(new LinkedList<Integer>(),
(list, el) -> {
if (list.isEmpty() || Math.abs(list.getLast() - el) >= 2) {
list.add(el);
} else {
list.set(list.size() - 1, list.getLast() + el);
}
return list;
}, (list1, list2) -> {
//don't really needed, as we are sequential
list1.addAll(list2); return list1;
});
The Java 8 way is to define a custom IntSpliterator class:
static class IntThresholdSpliterator extends Spliterators.AbstractIntSpliterator {
    private PrimitiveIterator.OfInt it;
    private int threshold;
    private int sum;
    public IntThresholdSpliterator(int threshold, IntStream stream, long est) {
        super(est, ORDERED);
        this.it = stream.iterator();
        this.threshold = threshold;
    }
    @Override
    public boolean tryAdvance(IntConsumer action) {
        if (!it.hasNext()) {
            return false;
        }
        int next = it.nextInt();
        if (next < threshold) {
            sum += next;
        } else {
            action.accept(next + sum);
            sum = 0;
        }
        return true;
    }
}
public static void main( String[] args )
{
IntThresholdSpliterator s = new IntThresholdSpliterator(3, IntStream.of(5, 2, 2, 5, 13), 5);
List<Integer> rs= StreamSupport.intStream(s, false).mapToObj(Integer::valueOf).collect(toList());
System.out.println(rs);
}
Also you can hack it as
List<Integer> list = Arrays.asList(5, 2, 2, 5, 13);
int[] sum = {0};
list = list.stream().filter(s -> {
if(s<=2) sum[0]+=s;
return s>2;
}).map(s -> {
int rs = s + sum[0];
sum[0] = 0;
return rs;
}).collect(toList());
System.out.println(list);
But I am not sure that this hack is a good idea for production code.
I have an ArrayList of String to which I have added some duplicate values, and I just want to remove those duplicates. How can I do that?
Here is an example of one idea I got:
List<String> list = new ArrayList<String>();
list.add("Krishna");
list.add("Krishna");
list.add("Kishan");
list.add("Krishn");
list.add("Aryan");
list.add("Harm");
System.out.println("List"+list);
for (int i = 1; i < list.size(); i++) {
String a1 = list.get(i);
String a2 = list.get(i-1);
if (a1.equals(a2)) {
list.remove(a1);
}
}
System.out.println("List after short"+list);
But is there a more efficient way to remove the duplicates from the list, without using a for loop?
I know I can do it by using a HashSet or some other way, but I want to use an ArrayList only.
I would like to have your suggestions. Thank you in advance.
You can create a LinkedHashSet from the list. The LinkedHashSet will contain each element only once, and in the same order as the List. Then create a new List from this LinkedHashSet. So effectively, it's a one-liner:
list = new ArrayList<String>(new LinkedHashSet<String>(list))
Any approach that involves List#contains or List#remove will probably increase the asymptotic running time from O(n) (as in the above example) to O(n^2).
EDIT For the requirement mentioned in the comment: If you want to remove duplicate elements, but consider the Strings as equal ignoring the case, then you could do something like this:
Set<String> toRetain = new TreeSet<String>(String.CASE_INSENSITIVE_ORDER);
toRetain.addAll(list);
Set<String> set = new LinkedHashSet<String>(list);
set.retainAll(new LinkedHashSet<String>(toRetain));
list = new ArrayList<String>(set);
It will have a running time of O(n*logn), which is still better than many other options. Note that this looks a little bit more complicated than it might have to be: I assumed that the order of the elements in the list may not be changed. If the order of the elements in the list does not matter, you can simply do
Set<String> set = new TreeSet<String>(String.CASE_INSENSITIVE_ORDER);
set.addAll(list);
list = new ArrayList<String>(set);
If you want to use only an ArrayList, then I'm afraid there is no better way that will give a huge performance benefit. But using only an ArrayList, I would check before adding into the list, like the following:
void addToList(String s){
if(!yourList.contains(s))
yourList.add(s);
}
In cases like this, using a Set is suitable.
You can make use of Google Guava utilities, as shown below
list = ImmutableSet.copyOf(list).asList();
This is probably the most efficient way of eliminating the duplicates from the list and interestingly, it preserves the iteration order as well.
UPDATE
But in case you don't want to involve Guava, duplicates can be removed as shown below.
ArrayList<String> list = new ArrayList<String>();
list.add("Krishna");
list.add("Krishna");
list.add("Kishan");
list.add("Krishn");
list.add("Aryan");
list.add("Harm");
System.out.println("List"+list);
HashSet hs = new HashSet();
hs.addAll(list);
list.clear();
list.addAll(hs);
But, of course, this destroys the iteration order of the elements in the ArrayList.
Shishir
Java 8 stream function
You could use the distinct function, as below, to get the distinct elements of the list:
stringList.stream().distinct().collect(Collectors.toList());
From the documentation,
Returns a stream consisting of the distinct elements (according to Object.equals(Object)) of this stream.
Another way, if you do not wish to rely on the equals method, is to use the collect function like this:
stringList.stream()
.collect(Collectors.toCollection(() ->
new TreeSet<String>((p1, p2) -> p1.compareTo(p2))
));
From the documentation,
Performs a mutable reduction operation on the elements of this stream using a Collector.
Hope that helps.
Simple function for removing duplicates from list
private void removeDuplicates(List<?> list)
{
int count = list.size();
for (int i = 0; i < count; i++)
{
for (int j = i + 1; j < count; j++)
{
if (list.get(i).equals(list.get(j)))
{
list.remove(j--);
count--;
}
}
}
}
Example:
Input: [1, 2, 2, 3, 1, 3, 3, 2, 3, 1, 2, 3, 3, 4, 4, 4, 1]
Output: [1, 2, 3, 4]
List<String> list = new ArrayList<String>();
list.add("Krishna");
list.add("Krishna");
list.add("Kishan");
list.add("Krishn");
list.add("Aryan");
list.add("Harm");
HashSet<String> hs=new HashSet<>(list);
System.out.println("=========With Duplicate Element========");
System.out.println(list);
System.out.println("=========Removed Duplicate Element========");
System.out.println(hs);
I don't think list = new ArrayList<String>(new LinkedHashSet<String>(list)) is the best way, since it relies on a LinkedHashSet anyway (we could just use a LinkedHashSet directly instead of an ArrayList).
Solution:
import java.util.ArrayList;
public class Arrays extends ArrayList{
@Override
public boolean add(Object e) {
if(!contains(e)){
return super.add(e);
}else{
return false;
}
}
public static void main(String[] args) {
Arrays element=new Arrays();
element.add(1);
element.add(2);
element.add(2);
element.add(3);
System.out.println(element);
}
}
Output:
[1, 2, 3]
Here I am extending ArrayList, as I am using it with some changes by overriding the add method.
public List<Contact> removeDuplicates(List<Contact> list) {
// Set set1 = new LinkedHashSet(list);
Set set = new TreeSet(new Comparator() {
@Override
public int compare(Object o1, Object o2) {
if(((Contact)o1).getId().equalsIgnoreCase(((Contact)o2).getId()) ) {
return 0;
}
return 1;
}
});
set.addAll(list);
final List newList = new ArrayList(set);
return newList;
}
This will be the best way
List<String> list = new ArrayList<String>();
list.add("Krishna");
list.add("Krishna");
list.add("Kishan");
list.add("Krishn");
list.add("Aryan");
list.add("Harm");
Set<String> set=new HashSet<>(list);
It is better to use HashSet.
1-a) A HashSet holds a set of objects, but in a way that it allows you to easily and quickly determine whether an object is already in the set or not. It does so by internally managing an array and storing the object using an index which is calculated from the hashcode of the object. Take a look here
1-b) HashSet is an unordered collection containing unique elements. It has the standard collection operations Add, Remove, Contains, but since it uses a hash-based implementation, these operations are O(1). (As opposed to List, for example, which is O(n) for Contains and Remove.) HashSet also provides standard set operations such as union, intersection, and symmetric difference. Take a look here
2) There are different implementations of Sets. Some make insertion and lookup operations super fast by hashing elements. However that means that the order in which the elements were added is lost. Other implementations preserve the added order at the cost of slower running times.
The HashSet class in C# goes for the first approach, thus not preserving the order of elements. It is much faster than a regular List. Some basic benchmarks showed that HashSet is decently faster when dealing with primitive types (int, double, bool, etc.), and a lot faster when working with class objects. So the point is that HashSet is fast.
The only catch with HashSet is that there is no access by index. To access elements you can either use an enumerator (iterator) or use the built-in function to convert the HashSet into a List and iterate through that. Take a look here
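In Java terms, that last point looks like this (a small sketch):
Set<String> set = new HashSet<>(list);      // fast add/contains, but no index access
for (String s : set) {                      // iterate directly
    System.out.println(s);
}
List<String> copy = new ArrayList<>(set);   // copy back to a List when index access is needed
String first = copy.get(0);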
Without a loop, no! Since ArrayList is indexed by position rather than by key, you cannot find the target element without iterating the whole list.
A good practice of programming is to choose proper data structure to suit your scenario. So if Set suits your scenario the most, the discussion of implementing it with List and trying to find the fastest way of using an improper data structure makes no sense.
public static void main(String[] args) {
#SuppressWarnings("serial")
List<Object> lst = new ArrayList<Object>() {
@Override
public boolean add(Object e) {
if(!contains(e))
return super.add(e);
else
return false;
}
};
lst.add("ABC");
lst.add("ABC");
lst.add("ABCD");
lst.add("ABCD");
lst.add("ABCE");
System.out.println(lst);
}
This is a better way:
list = list.stream().distinct().collect(Collectors.toList());
This could be one of the solutions using the Java 8 Stream API. Hope this helps.
public void removeDuplicates() {
ArrayList<Object> al = new ArrayList<Object>();
al.add("java");
al.add('a');
al.add('b');
al.add('a');
al.add("java");
al.add(10.3);
al.add('c');
al.add(14);
al.add("java");
al.add(12);
System.out.println("Before Remove Duplicate elements:" + al);
for (int i = 0; i < al.size(); i++) {
for (int j = i + 1; j < al.size(); j++) {
if (al.get(i).equals(al.get(j))) {
al.remove(j);
j--;
}
}
}
System.out.println("After Removing duplicate elements:" + al);
}
Before Remove Duplicate elements:
[java, a, b, a, java, 10.3, c, 14, java, 12]
After Removing duplicate elements:
[java, a, b, 10.3, c, 14, 12]
Using Java 8:
public static <T> List<T> removeDuplicates(List<T> list) {
return list.stream().collect(Collectors.toSet()).stream().collect(Collectors.toList());
}
In case you just need to remove the duplicates using only an ArrayList, and no other Collection classes, then:
//list is the original arraylist containing the duplicates as well
List<String> uniqueList = new ArrayList<String>();
for(int i=0;i<list.size();i++) {
if(!uniqueList.contains(list.get(i)))
uniqueList.add(list.get(i));
}
Hope this helps!
private static void removeDuplicates(List<Integer> list)
{
Collections.sort(list);
int count = list.size();
for (int i = 0; i < count; i++)
{
if(i+1<count && list.get(i).equals(list.get(i+1))){
list.remove(i);
i--;
count--;
}
}
}
public static List<String> removeDuplicateElements(List<String> array){
List<Integer> duplicateIndices = new ArrayList<Integer>();
// record the index of every element that repeats an earlier element
for (int i = 0; i < array.size() - 1; i++){
for (int j = i + 1; j < array.size(); j++)
{
if (array.get(i).compareTo(array.get(j)) == 0 && !duplicateIndices.contains(j)) {
duplicateIndices.add(j);
}
}
}
// remove from the highest index down so the remaining indices stay valid
Collections.sort(duplicateIndices, Collections.reverseOrder());
for (int index : duplicateIndices) {
array.remove(index);
}
return array;
}