Vavr: How to flatMap a collection inside an Option object - java

Is there an easier way to write the code below, without using toStream()?
import io.vavr.collection.List;
import io.vavr.control.Option;
import lombok.Value;
public class VavrDemo {
public static void main(String[] args) {
Foo bar = new Foo(List.of(new Bar(1), new Bar(2)));
Number value = Option.some(bar)
.toStream() // <- WTF?!?
.flatMap(Foo::getBars)
.map(Bar::getValue)
.sum();
System.out.println(value);
}
@Value
static class Foo {
private List<Bar> bars;
}
@Value
static class Bar {
private int value;
}
}

Option is a so-called Monad. This just tells us that the flatMap function follows specific laws, namely
Let
A, B, C be types
unit: A -> Monad<A> a constructor
f: A -> Monad<B>, g: B -> Monad<C> functions
a be an object of type A
m be an object of type Monad<A>
Then all instances of the Monad interface should obey the Functor laws (omitted here) and the three control laws:
Left identity: unit(a).flatMap(f) ≡ f a
Right identity: m.flatMap(unit) ≡ m
Associativity: m.flatMap(f).flatMap(g) ≡ m.flatMap(x -> f.apply(x).flatMap(g))
Currently Vavr has (simplified):
interface Option<T> {
default <U> Option<U> flatMap(Function<T, Option<U>> mapper) {
return isEmpty() ? none() : mapper.apply(get());
}
}
This version obeys the Monad laws.
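For example, left identity can be checked directly against Vavr's Option (a small sketch, using the imports from the VavrDemo above plus java.util.function.Function; the function f and the value 1 are arbitrary):
Function<Integer, Option<String>> f = i -> Option.some("value " + i);
// left identity: unit(a).flatMap(f) ≡ f(a)
System.out.println(Option.some(1).flatMap(f).equals(f.apply(1))); // true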
It is not possible to define an Option.flatMap the way you want while still obeying the Monad laws. For example, imagine a flatMap version that accepts a function returning an Iterable. All Vavr collections have such a flatMap method, but for Option it does not make sense:
interface Option<T> {
default <U> Option<U> flatMap(Function<T, Iterable<U>> mapper) {
if (isEmpty()) {
return none();
} else {
Iterable<U> iterable = mapper.apply(get());
if (isEmpty(iterable)) {
return none();
} else {
U resultValue = whatToDoWith(iterable); // ???
return some(resultValue);
}
}
}
}
You see? The best thing we can do is to take just one element of the iterable in case it is not empty. Besides not giving us the result you may have expected (in the VavrDemo above), we can prove that this 'fantasy' version of flatMap breaks the Monad laws.
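Concretely (a sketch, assuming a fantasy flatMap that keeps only the head of the iterable):
// f: x -> List.of(x, x + 1)
// unit(1).flatMap(f) would yield some(1), because only the head of List(1, 2) survives,
// while f(1) yields List(1, 2). So unit(a).flatMap(f) ≠ f(a): left identity cannot hold.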
If you are stuck in such a situation, consider changing your calls slightly. For example, the VavrDemo can be expressed like this:
Number value = Option.some(bar)
.map(b -> b.getBars().map(Bar::getValue).sum())
.getOrElse(0);
I hope this helps and the Monad section above does not completely scare you away. In fact, developers do not need to know anything about Monads in order to take advantage of Vavr.
Disclaimer: I'm the creator of Vavr (formerly: Javaslang)

How about using .fold() or .getOrElse()?
Option.some(bar)
.fold(List::<Bar>empty, Foo::getBars)
.map(Bar::getValue)
.sum();
Option.some(bar)
.map(Foo::getBars)
.getOrElse(List::empty)
.map(Bar::getValue)
.sum();

Related

Custom "sumDouble()" function in OptaPlanner for constraints

I need sum functionality that sums up double values for my ConstraintProvider functionality. Currently OptaPlanner offers sum() and sumBigDecimal() functionality, where the first only sums integer values and the second BigDecimal values. Therefore I started with the compose approach as described in manual chapter 6.4.5.3 to implement my own functionality (I didn't want to override the original one).
Deriving from there and taking the implementation of the sum functionality from the ConstraintCollector.java class of the OptaPlanner source code itself, I ended up with the following code:
public static <A> UniConstraintCollector<A, ?, Double> sumDouble(ToDoubleFunction<A> groupValueMapping) {
return compose((resultContainer, a) -> {
double value = groupValueMapping.applyAsDouble(a);
resultContainer[0] += value;
return () -> resultContainer[0] -= value;
},
resultContainer -> resultContainer[0]);
}
within my "OwnConstraintProvider" class. But this doesn't work out. The error is:
java: method compose in interface java.lang.module.ModuleFinder cannot be applied to given types;
required: java.lang.module.ModuleFinder[]
found: (resultCon[...]ue; },(resultCon[...]er[0]
reason: varargs mismatch; java.lang.module.ModuleFinder is not a functional interface
multiple non-overriding abstract methods found in interface java.lang.module.ModuleFinder
I am aware that there must be a clearer relationship and calculation approach between the input A and the result.
Frankly speaking, I have only recently started Java programming seriously, so I can't sort out where I am mistaken in this case. In the manual of the current version used (8.19.0) there is "a generic sum() variant for summing up custom types" mentioned in chapter 6.4.5.1.3, but I have no clue about the details on that.
Can anybody give me a hint on this, please?
Thanks in advance!
First of all, Radovan is completely correct in his answer. In fact, the potential score corruptions are the reason why sumDouble() is not provided. Instead, we provide sumBigDecimal(), which doesn't suffer from the same issue. However, it will suffer in terms of performance. The preferred solution is to use either sum() or sumLong(), using fixed-point arithmetic if necessary.
That said, implementing sumDouble() is relatively simple, and you do not need composition to achieve that:
public static <A> UniConstraintCollector<A, ?, Double> sum(ToDoubleFunction<? super A> groupValueMapping) {
return new DefaultUniConstraintCollector<>(
() -> new double[1],
(resultContainer, a) -> {
double value = groupValueMapping.applyAsDouble(a);
resultContainer[0] += value;
return () -> resultContainer[0] -= value;
},
resultContainer -> resultContainer[0]);
}
Now, DefaultUniConstraintCollector is not a public type. But you can use an anonymous class instead:
public static <A> UniConstraintCollector<A, ?, Double> sum(ToDoubleFunction<? super A> groupValueMapping) {
return new UniConstraintCollector<A, double[], Double>() {
@Override
public Supplier<double[]> supplier() {
return () -> new double[1];
}
@Override
public BiFunction<double[], A, Runnable> accumulator() {
return (resultContainer, a) -> {
double value = groupValueMapping.applyAsDouble(a);
resultContainer[0] += value;
return () -> resultContainer[0] -= value;
};
}
@Override
public Function<double[], Double> finisher() {
return resultContainer -> resultContainer[0];
}
};
}
Use this at your own risk, and make sure you check for score corruptions, preferably in a very long solver run.
Have you considered using a different type, e.g. long, to represent values you need to sum() in your constraint?
Using floating-point numbers in the score calculation is generally not recommended as it may lead to score corruption.
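If the double values are, for example, money amounts, a common fixed-point workaround is to scale them to the smallest unit and sum them with the built-in long collector. A small sketch (Shift and getCostInEuros are made-up names; ConstraintCollectors is the standard OptaPlanner factory class):
// Sum euro amounts as long cents instead of summing doubles.
UniConstraintCollector<Shift, ?, Long> totalCostInCents =
        ConstraintCollectors.sumLong(shift -> Math.round(shift.getCostInEuros() * 100));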

Nested null checks in Java8 Optional vs Stream

This is a question regarding a possibly confusing Streams behavior. I came under the impression that the .map operation (because of its usage in Optional) was always null-safe (I'm aware that these .map's are different implementations though they share the same name). And I was quite surprised when I got an NPE when I used it that way on a (list) stream. Since then, I started using Objects::nonNull with streams (both with .map & .flatMap operations).
Q1. Why is it that Optional can handle nulls at any level, whereas Streams can't (at any level), as shown in my test code below? If this is the sensible and desirable behavior, please give an explanation (as to its benefits, or the downsides of a list Stream behaving like Optional).
Q2. As a follow-up, is there an alternative to the excessive null-checks that I perform in the getValues method below (which is what prompted me to wonder why Streams could not behave like Optional)?
In the below test, I'm interested in the innermost class's value field only.
I use Optional in the getValue method.
I use Streams on the list in the getValues method, and I cannot remove a single nonNull check in this case.
import lombok.AllArgsConstructor;
import lombok.Getter;
import java.util.Arrays;
import java.util.List;
import java.util.Objects;
import java.util.Optional;
import java.util.stream.Collectors;
public class NestedObjectsStreamTest {
@Getter @AllArgsConstructor
private static class A {
private B b;
}
@Getter @AllArgsConstructor
private static class B {
private C c;
}
@Getter @AllArgsConstructor
private static class C {
private D d;
}
@Getter @AllArgsConstructor
private static class D {
private String value;
}
public static void main(String[] args) {
A a0 = new A(new B(new C(new D("a0"))));
A a1 = new A(new B(new C(new D("a1"))));
A a2 = new A(new B(new C(new D(null))));
A a3 = new A(new B(new C(null)));
A a5 = new A(new B(null));
A a6 = new A(null);
A a7 = null;
System.out.println("getValue(a0) = " + getValue(a0));
System.out.println("getValue(a1) = " + getValue(a1));
System.out.println("getValue(a2) = " + getValue(a2));
System.out.println("getValue(a3) = " + getValue(a3));
System.out.println("getValue(a5) = " + getValue(a5));
System.out.println("getValue(a6) = " + getValue(a6));
System.out.println("getValue(a7) = " + getValue(a7));
List<A> aList = Arrays.asList(a0, a1, a2, a3, a5, a6, a7);
System.out.println("getValues(aList) " + getValues(aList));
}
private static String getValue(final A a) {
return Optional.ofNullable(a)
.map(A::getB)
.map(B::getC)
.map(C::getD)
.map(D::getValue)
.orElse("default");
}
private static List<String> getValues(final List<A> aList) {
return aList.stream()
.filter(Objects::nonNull)
.map(A::getB)
.filter(Objects::nonNull)
.map(B::getC)
.filter(Objects::nonNull)
.map(C::getD)
.filter(Objects::nonNull)
.map(D::getValue)
.filter(Objects::nonNull)
.collect(Collectors.toList());
}
}
Output
getValue(a0) = a0
getValue(a1) = a1
getValue(a2) = default
getValue(a3) = default
getValue(a5) = default
getValue(a6) = default
getValue(a7) = default
getValues(aList) [a0, a1]
Q1. Why is it that Optional can handle nulls at any level, whereas Streams can't (at any level), as shown in my test code below?
A Stream can "contain" null values. An Optional can't, by contract (the contract is explained in the javadoc): either it's empty and map returns empty, or it's not empty and is then guaranteed to have a non-null value.
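A small sketch of that difference (using the same imports as the test class above, plus java.util.stream.Stream): the mapper is never applied to an empty Optional, but a Stream applies it to a null element unless the null is handled explicitly.
// Optional: the mapper is never invoked on an empty Optional, so no NPE.
String value = Optional.ofNullable((String) null)
        .map(String::length)
        .map(Object::toString)
        .orElse("default"); // "default"

// Stream: map(String::length) would throw a NullPointerException on the null element,
// so the null has to be filtered out (or handled) explicitly.
List<Integer> lengths = Stream.of("a", null)
        .filter(Objects::nonNull)
        .map(String::length)
        .collect(Collectors.toList()); // [1]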
Q2. As a follow up, is there an alternative to the excessive null-checks that I perform in the getValues method below.
Favor designs which avoid using nulls all over the place.
Here is the code you can try:
aList.stream()
.map(applyIfNotNull(A::getB))
.map(applyIfNotNull(B::getC))
.map(applyIfNotNull(C::getD))
.map(applyIfNotNullOrDefault(D::getValue, "default"))
.filter(Objects::nonNull)
.forEach(System.out::println);
With the below utility methods:
public static <T, U> Function<T, U> applyIfNotNull(Function<T, U> mapper) {
return t -> t != null ? mapper.apply(t) : null;
}
public static <T, U> Function<T, U> applyIfNotNullOrDefault(Function<T, U> mapper, U defaultValue) {
return t -> t != null ? mapper.apply(t) : defaultValue;
}
public static <T, U> Function<T, U> applyIfNotNullOrElseGet(Function<T, U> mapper, Supplier<U> supplier) {
return t -> t != null ? mapper.apply(t) : supplier.get();
}
Not sure how it looks to you, but I personally don't like map(...).map(...).
Here is what I like more:
aList.stream()
.map(applyIfNotNull(A::getB, B::getC, C::getD))
.map(applyIfNotNullOrDefault(D::getValue, "default"))
.filter(Objects::nonNull)
.forEach(System.out::println);
With one more utility method:
public static <T1, T2, T3, R> Function<T1, R> applyIfNotNull(Function<T1, T2> mapper1, Function<T2, T3> mapper2,
Function<T3, R> mapper3) {
return t -> {
if (t == null) {
return null;
} else {
T2 t2 = mapper1.apply(t);
if (t2 == null) {
return null;
} else {
T3 t3 = mapper2.apply(t2);
return t3 == null ? null : mapper3.apply(t3);
}
}
};
}
Q1. Why is it that Optional can handle nulls at any level, whereas
Streams can't (at any level), as shown in my test code below?
Optional was created to handle null values itself, i.e. for cases where the programmer does not want to handle nulls himself. The Optional.map() method wraps the result in an Optional, thus handling the nulls and taking that responsibility away from the developer.
Streams, on the other hand, leave the handling of nulls to the developer, since a developer might want to handle nulls in a different way. Look at this link. It describes the different choices the developers of Stream had and their reasoning for each of the cases.
Q2. As a follow-up, is there an alternative to the excessive
null-checks that I perform in the getValues method below
In use cases like the one you mentioned, where you don't want to handle the null cases, go with Optional. As @JB Nizet said, avoid null scenarios when using streams, or handle them yourself. This argues similarly. If you go through the first link I shared, you will probably see that banning null from a stream would be too harsh, and absorbing null would hamper the truthfulness of the size() method and other functionality.
Your Q1 has already been answered by raviiii1 and JB Nizet.
Regarding your Q2:
is there an alternative to the excessive null-checks that I perform in the getValues method below
You could always combine both Stream and Optional like this:
private static List<String> getValues(final List<A> aList) {
return aList.stream()
.map(Optional::ofNullable)
.map(opta -> opta.map(A::getB))
.map(optb -> optb.map(B::getC))
.map(optc -> optc.map(C::getD))
.map(optd -> optd.map(D::getValue))
.map(optv -> optv.orElse("default"))
.collect(Collectors.toList());
}
Of course, this would be much cleaner:
private static List<String> getValues(final List<A> aList) {
return aList.stream()
.map(NestedObjectsStreamTest::getValue)
.collect(Collectors.toList());
}

Explanation for v -> v > 5

I have a given function call and Java gives me an error because Objects are not comparable to ints (of course...). Can someone explain to me what I have to change?
I tried bracing the lambda expression differently, but with no useful result. I think that the lambda expression is correct and the filter function is slightly wrong, but I'm not able to find my mistake...
// function call
filter(v -> v > 5)
// function
public Optional<T> filter(Predicate<T> tester) {
if(isPresent() && tester.test(get())) {
return this;
} else {
return Optional.empty();
}
}
I would expect an Optional.empty object, but I get a Java error because in v > 5 the Object v is not comparable to an int.
You have to make T a wrapper class which is comparable with an int, e.g.
IntStream.range(0, 10)
.filter(v -> v > 5)
.forEach(System.out::println);
is fine because v is an int.
You can't use this expression when T is unknown.
What you can do is assume that T must be a number, e.g.
filter( v -> ((Number) v).doubleValue() > 5)
however, this will produce a ClassCastException if T is another type.
The real solution is to make T a Number
e.g.
class MyClass<T extends Number> {
public Optional<T> filter(Predicate<T> test) {
or make it a specific type like int
class MyClass {
public OptionalInt filter(IntPredicate test) {
In Java, primitive types (e.g. int) and objects (e.g. Object) don't have a common ancestor in the type hierarchy. Because of that, predicates and other stream constructs come in two flavors: there is IntPredicate, which you have to use when working with int, and Predicate, which you have to use when working with objects.
One way to write your filter function would be to use OptionalInt and IntPredicate:
public OptionalInt filter(IntPredicate tester) {
if (isPresent() && tester.test(get())) {
return ...
} else {
return OptionalInt.empty();
}
}
v -> v > 5 can mean different things. It depends on context.
It could be a (Object v) -> v > 5 causing a compilation error since > can't be applied to an Object:
Stream.<Object>of("123", 123).filter(v -> v > 5);
It could be a (Integer v) -> v > 5 meaning that unboxing and autoboxing will be performed in order to do the comparison and to return the result:
Stream.<Integer>of(123, 123).filter(v -> v > 5);
It could be a (int v) -> v > 5 meaning that it's an instance of IntPredicate and things will go smoothly here:
IntStream.of(123, 123).filter(v -> v > 5);
I think, that the lambda expression is correct and the
filter-function is slightly wrong, but I'm not able to find out my
mistake...
You are right.
Your method is defined inside a generic class, so it relies on the generic type declared for that class.
Supposing your class is named Foo, the filter() method uses the generic type T as its return/parameter type:
public class Foo<T>{
// ...
public Optional<T> filter(Predicate<T> tester) {
// ...
}
}
It works with inference.
So you get a Predicate of T. But T depends on the generic type defined in the class, and also on the way you declared the instance of the Foo class.
And it appears that here T is not a Number.
As an alternative, you could also rely on inference from the declared Foo variable.
If you do that:
Foo<Integer> foo = new Foo<>();
Optional<Integer> optInt = foo.filter(v -> v > 5);
it will compile fine as Integer will be inferred from Foo<Integer>.
So I think that to solve your issue, you should either declare Number or Integer as the bound of the generic type:
public class Foo<T extends Integer>{
// ...
public Optional<T> filter(Predicate<T> tester) {
// ...
}
}
or rely on the inference of the client as in the previous example.

Java 8 Streams: simplifying o1 -> Objects.equals(o1.getSome().getSomeOther(), o2.getSome().getSomeOther()) in a stream

Given the following code:
stream.filter(o1 -> Objects.equals(o1.getSome().getSomeOther(),
o2.getSome().getSomeOther()))
How could that possibly be simplified?
Is there some equals-utility that lets you first extract a key just like there is Comparator.comparing which accepts a key extractor function?
Note that the code itself (getSome().getSomeOther()) is actually generated from a schema.
EDIT: (after discussing with a colleague and after revisiting: Is there a convenience method to create a Predicate that tests if a field equals a given value?)
We have now come to the following reusable functional interface:
@FunctionalInterface
public interface Property<T, P> {
P extract(T object);
default Predicate<T> like(T example) {
Predicate<P> equality = Predicate.isEqual(extract(example));
return (value) -> equality.test(extract(value));
}
}
and the following static convenience method:
static <T, P> Property<T, P> property(Property<T, P> property) {
return property;
}
The filtering now looks like:
stream.filter(property(t -> t.getSome().getSomeOther()).like(o2))
What I like about this solution compared to the previous one: it clearly separates the extraction of the property from the creation of the Predicate itself, and it states more clearly what is going on.
Previous solution:
<T, U> Predicate<T> isEqual(T other, Function<T, U> keyExtractFunction) {
U otherKey = keyExtractFunction.apply(other);
return t -> Objects.equals(keyExtractFunction.apply(t), otherKey);
}
which results in the following usage:
stream.filter(isEqual(o2, t -> t.getSome().getSomeOther()))
but I am more than happy if anyone has a better solution.
I think that your question's approach is more readable than your answer's. And I also think that using inline lambdas is fine, as long as the lambda is simple and short.
However, for maintenance, readability, debugging and testability reasons, I always move the logic I'd use in a lambda (either a predicate or a function) to one or more methods. In your case, I would do:
class YourObject {
private Some some;
public boolean matchesSomeOther(YourObject o2) {
return this.getSome().matchesSomeOther(o2.getSome());
}
}
class Some {
private SomeOther someOther;
public boolean matchesSomeOther(Some some2) {
return Objects.equals(this.getSomeOther(), some2.getSomeOther());
}
}
With these methods in place, your predicate now becomes trivial:
YourObject o2 = ...;
stream.filter(o2::matchesSomeOther)

How to negate a method reference predicate

In Java 8, you can use a method reference to filter a stream, for example:
Stream<String> s = ...;
long emptyStrings = s.filter(String::isEmpty).count();
Is there a way to create a method reference that is the negation of an existing one, i.e. something like:
long nonEmptyStrings = s.filter(not(String::isEmpty)).count();
I could create the not method like below but I was wondering if the JDK offered something similar.
static <T> Predicate<T> not(Predicate<T> p) { return o -> !p.test(o); }
Predicate.not( … )
Java 11 offers a new method Predicate#not
So you can negate the method reference:
Stream<String> s = ...;
long nonEmptyStrings = s.filter(Predicate.not(String::isEmpty)).count();
I'm planning to static import the following to allow for the method reference to be used inline:
public static <T> Predicate<T> not(Predicate<T> t) {
return t.negate();
}
e.g.
Stream<String> s = ...;
long nonEmptyStrings = s.filter(not(String::isEmpty)).count();
Update: Starting from Java 11, the JDK offers a similar solution built in as well.
There is a way to compose a method reference that is the opposite of a current method reference. See @vlasec's answer below, which shows how, by explicitly casting the method reference to a Predicate and then converting it using the negate function. That is one way among a few other not-too-troublesome ways to do it.
The opposite of this:
Stream<String> s = ...;
long emptyStrings = s.filter(String::isEmpty).count();
is this:
Stream<String> s = ...;
long notEmptyStrings = s.filter(((Predicate<String>) String::isEmpty).negate()).count();
or this:
Stream<String> s = ...;
long notEmptyStrings = s.filter(it -> !it.isEmpty()).count();
Personally, I prefer the latter technique because I find it clearer to read it -> !it.isEmpty() than a long, verbose explicit cast and then a negate.
One could also make a predicate and reuse it:
Predicate<String> notEmpty = (String it) -> !it.isEmpty();
Stream<String> s = ...;
long notEmptyStrings = s.filter(notEmpty).count();
Or, if you have a collection or array, just use a for-loop, which is simple, has less overhead, and *might be **faster:
int notEmpty = 0;
for(String s : list) if(!s.isEmpty()) notEmpty++;
*If you want to know what is faster, then use JMH http://openjdk.java.net/projects/code-tools/jmh, and avoid hand benchmark code unless it avoids all JVM optimizations — see Java 8: performance of Streams vs Collections
**I am getting flak for suggesting that the for-loop technique is faster. It eliminates a stream creation, it eliminates another method call (the negate function for the predicate), and it eliminates a temporary accumulator list/counter. So a few things are saved by the last construct that might make it faster.
I do think it is simpler and nicer though, even if not faster. If the job calls for a hammer and a nail, don't bring in a chainsaw and glue! I know some of you take issue with that.
wish-list: I would like to see Java Stream functions evolve a bit now that Java users are more familiar with them. For example, the 'count' method in Stream could accept a Predicate so that this can be done directly like this:
Stream<String> s = ...;
int notEmptyStrings = s.count(it -> !it.isEmpty());
or
List<String> list = ...;
int notEmptyStrings = list.count(it -> !it.isEmpty());
Predicate has methods and, or and negate.
However, String::isEmpty is not a Predicate, it's just a String -> Boolean lambda and it could still become anything, e.g. Function<String, Boolean>. Type inference is what needs to happen first. The filter method infers the type implicitly. But if you negate it before passing it as an argument, that no longer happens. As @axtavt mentioned, an explicit cast can be used as an ugly workaround:
s.filter(((Predicate<String>) String::isEmpty).negate()).count()
There are other ways advised in other answers, with a static not method and a lambda most likely being the best ideas. This concludes the tl;dr section.
However, if you want a deeper understanding of lambda type inference, I'd like to explain it in a bit more depth, using examples. Look at these and try to figure out what happens:
Object obj1 = String::isEmpty;
Predicate<String> p1 = s -> s.isEmpty();
Function<String, Boolean> f1 = String::isEmpty;
Object obj2 = p1;
Function<String, Boolean> f2 = (Function<String, Boolean>) obj2;
Function<String, Boolean> f3 = p1::test;
Predicate<Integer> p2 = s -> s.isEmpty();
Predicate<Integer> p3 = String::isEmpty;
obj1 doesn't compile - lambdas need to infer a functional interface (= with one abstract method)
p1 and f1 work just fine, each inferring a different type
obj2 casts a Predicate to Object - silly but valid
f2 fails at runtime - you cannot cast Predicate to Function, it's no longer about inference
f3 works - you call the predicate's method test that is defined by its lambda
p2 doesn't compile - Integer doesn't have isEmpty method
p3 doesn't compile either - there is no String::isEmpty static method with Integer argument
Building on others' answers and personal experience:
Predicate<String> blank = String::isEmpty;
content.stream()
.filter(blank.negate())
Another option is to utilize lambda casting in non-ambiguous contexts, with the helpers collected into one class:
public static class Lambdas {
public static <T> Predicate<T> as(Predicate<T> predicate){
return predicate;
}
public static <T> Consumer<T> as(Consumer<T> consumer){
return consumer;
}
public static <T> Supplier<T> as(Supplier<T> supplier){
return supplier;
}
public static <T, R> Function<T, R> as(Function<T, R> function){
return function;
}
}
... and then static import the utility class:
stream.filter(as(String::isEmpty).negate())
Shouldn't Predicate#negate be what you are looking for?
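For example (a small sketch, reusing the stream s from the question):
Predicate<String> isEmpty = String::isEmpty;
long nonEmptyStrings = s.filter(isEmpty.negate()).count();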
In this case you could use org.apache.commons.lang3.StringUtils and do
long nonEmptyStrings = s.filter(StringUtils::isNotEmpty).count();
I have written a complete utility class (inspired by Askar's proposal) that can take Java 8 lambda expressions and turn them (if applicable) into any typed standard Java 8 functional interface defined in the package java.util.function. You can, for example, do:
asPredicate(String::isEmpty).negate()
asBiPredicate(String::equals).negate()
Because there would be numerous ambiguities if all the static methods were named just as(), I opted to call the method "as" followed by the returned type. This gives us full control of the lambda interpretation. Below is the first part of the (somewhat large) utility class, revealing the pattern used.
Have a look at the complete class here (at gist).
public class FunctionCastUtil {
public static <T, U> BiConsumer<T, U> asBiConsumer(BiConsumer<T, U> biConsumer) {
return biConsumer;
}
public static <T, U, R> BiFunction<T, U, R> asBiFunction(BiFunction<T, U, R> biFunction) {
return biFunction;
}
public static <T> BinaryOperator<T> asBinaryOperator(BinaryOperator<T> binaryOperator) {
return binaryOperator;
}
... and so on...
}
You can use Predicates from Eclipse Collections
MutableList<String> strings = Lists.mutable.empty();
int nonEmptyStrings = strings.count(Predicates.not(String::isEmpty));
If you can't change the type of strings from List:
List<String> strings = new ArrayList<>();
int nonEmptyStrings = ListAdapter.adapt(strings).count(Predicates.not(String::isEmpty));
If you only need a negation of String.isEmpty() you can also use StringPredicates.notEmpty().
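For example (a small sketch, reusing the strings list from above):
int nonEmptyStrings = strings.count(StringPredicates.notEmpty());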
Note: I am a contributor to Eclipse Collections.
You can accomplish this with long nonEmptyStrings = s.filter(str -> !str.isEmpty()).count();
Tip: to negate a collection.stream().anyMatch(...), one can use collection.stream().noneMatch(...)
If you're using Spring Boot (2.0.0+) you can use:
import org.springframework.util.StringUtils;
...
.filter(StringUtils::hasLength)
...
Which does:
return (str != null && !str.isEmpty());
So it will have the required negation effect for isEmpty.
