In Java 8, you can use a method reference to filter a stream, for example:
Stream<String> s = ...;
long emptyStrings = s.filter(String::isEmpty).count();
Is there a way to create a method reference that is the negation of an existing one, i.e. something like:
long nonEmptyStrings = s.filter(not(String::isEmpty)).count();
I could create the not method like below but I was wondering if the JDK offered something similar.
static <T> Predicate<T> not(Predicate<T> p) { return o -> !p.test(o); }
Predicate.not( … )
Java 11 offers a new method, Predicate#not.
So you can negate the method reference:
Stream<String> s = ...;
long nonEmptyStrings = s.filter(Predicate.not(String::isEmpty)).count();
I'm planning to static import the following to allow for the method reference to be used inline:
public static <T> Predicate<T> not(Predicate<T> t) {
return t.negate();
}
e.g.
Stream<String> s = ...;
long nonEmptyStrings = s.filter(not(String::isEmpty)).count();
Update: Starting from Java 11, the JDK offers a similar solution built in as well.
There is a way to compose a method reference that is the opposite of an existing one. See @vlasec's answer below, which shows how: explicitly cast the method reference to a Predicate and then convert it using the negate function. That is one of a few not-too-troublesome ways to do it.
The opposite of this:
Stream<String> s = ...;
long emptyStrings = s.filter(String::isEmpty).count();
is this:
Stream<String> s = ...;
long notEmptyStrings = s.filter(((Predicate<String>) String::isEmpty).negate()).count();
or this:
Stream<String> s = ...;
long notEmptyStrings = s.filter( it -> !it.isEmpty() ).count();
Personally, I prefer the latter technique because it -> !it.isEmpty() reads more clearly than a long, verbose explicit cast followed by negate.
One could also make a predicate and reuse it:
Predicate<String> notEmpty = (String it) -> !it.isEmpty();
Stream<String> s = ...;
long notEmptyStrings = s.filter(notEmpty).count();
Or, if you have a collection or array, just use a for-loop, which is simple, has less overhead, and *might be **faster:
int notEmpty = 0;
for(String s : list) if(!s.isEmpty()) notEmpty++;
*If you want to know what is faster, use JMH (http://openjdk.java.net/projects/code-tools/jmh), and avoid hand-rolled benchmark code unless it avoids all JVM optimizations — see Java 8: performance of Streams vs Collections.
**I am getting flak for suggesting that the for-loop technique is faster. It eliminates creating a stream, it eliminates another method call (a negating function for the predicate), and it eliminates a temporary accumulator list/counter. So the last construct saves a few things that might make it faster.
I do think it is simpler and nicer though, even if not faster. If the job calls for a hammer and a nail, don't bring in a chainsaw and glue! I know some of you take issue with that.
wish-list: I would like to see Java Stream functions evolve a bit now that Java users are more familiar with them. For example, the 'count' method in Stream could accept a Predicate so that this can be done directly:
Stream<String> s = ...;
int notEmptyStrings = s.count(it -> !it.isEmpty());
or
List<String> list = ...;
int notEmptyStrings = list.count(it -> !it.isEmpty());
Predicate has methods and, or and negate.
However, String::isEmpty is not yet a Predicate; it's just a String -> Boolean method reference and could still become anything, e.g. a Function<String, Boolean>. Type inference needs to happen first. The filter method infers the type implicitly. But if you negate it before passing it as an argument, that inference no longer happens. As @axtavt mentioned, an explicit cast can be used as an ugly workaround:
s.filter(((Predicate<String>) String::isEmpty).negate()).count()
There are other ways advised in other answers, with a static not method and a lambda most likely being the best ideas. This concludes the tl;dr section.
However, if you want some deeper understanding of lambda type inference, I'd like to explain it in a bit more depth, using examples. Look at these and try to figure out what happens:
Object obj1 = String::isEmpty;
Predicate<String> p1 = s -> s.isEmpty();
Function<String, Boolean> f1 = String::isEmpty;
Object obj2 = p1;
Function<String, Boolean> f2 = (Function<String, Boolean>) obj2;
Function<String, Boolean> f3 = p1::test;
Predicate<Integer> p2 = s -> s.isEmpty();
Predicate<Integer> p3 = String::isEmpty;
obj1 doesn't compile - lambdas need to infer a functional interface (= with one abstract method)
p1 and f1 work just fine, each inferring a different type
obj2 casts a Predicate to Object - silly but valid
f2 fails at runtime - you cannot cast Predicate to Function, it's no longer about inference
f3 works - you call the predicate's method test that is defined by its lambda
p2 doesn't compile - Integer doesn't have isEmpty method
p3 doesn't compile either - there is no String::isEmpty static method with Integer argument
Building on others' answers and personal experience:
Predicate<String> blank = String::isEmpty;
content.stream()
.filter(blank.negate())
Another option is to gather lambda-casting helpers, for use in non-ambiguous contexts, into one class:
public static class Lambdas {
public static <T> Predicate<T> as(Predicate<T> predicate){
return predicate;
}
public static <T> Consumer<T> as(Consumer<T> consumer){
return consumer;
}
public static <T> Supplier<T> as(Supplier<T> supplier){
return supplier;
}
public static <T, R> Function<T, R> as(Function<T, R> function){
return function;
}
}
... and then static import the utility class:
stream.filter(as(String::isEmpty).negate())
Shouldn't Predicate#negate be what you are looking for?
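For example, once the method reference is bound to a Predicate variable, negate() is available on it (a minimal sketch, assuming a Stream<String> s as in the question):
Predicate<String> isEmpty = String::isEmpty;
long nonEmptyStrings = s.filter(isEmpty.negate()).count();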
In this case you could use org.apache.commons.lang3.StringUtils and do
long nonEmptyStrings = s.filter(StringUtils::isNotEmpty).count();
I have written a complete utility class (inspired by Askar's proposal) that can take Java 8 lambda expressions and turn them (if applicable) into any typed standard Java 8 functional interface defined in the package java.util.function. You can, for example, do:
asPredicate(String::isEmpty).negate()
asBiPredicate(String::equals).negate()
Because there would be numerous ambiguities if all the static methods were named just as(), I opted to call the methods "as" followed by the returned type. This gives us full control over the lambda interpretation. Below is the first part of the (somewhat large) utility class, revealing the pattern used.
Have a look at the complete class here (at gist).
public class FunctionCastUtil {
public static <T, U> BiConsumer<T, U> asBiConsumer(BiConsumer<T, U> biConsumer) {
return biConsumer;
}
public static <T, U, R> BiFunction<T, U, R> asBiFunction(BiFunction<T, U, R> biFunction) {
return biFunction;
}
public static <T> BinaryOperator<T> asBinaryOperator(BinaryOperator<T> binaryOperator) {
return binaryOperator;
}
... and so on...
}
You can use Predicates from Eclipse Collections
MutableList<String> strings = Lists.mutable.empty();
int nonEmptyStrings = strings.count(Predicates.not(String::isEmpty));
If you can't change the strings from List:
List<String> strings = new ArrayList<>();
int nonEmptyStrings = ListAdapter.adapt(strings).count(Predicates.not(String::isEmpty));
If you only need a negation of String.isEmpty() you can also use StringPredicates.notEmpty().
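For example (a sketch, assuming the StringPredicates factory from Eclipse Collections is available):
int nonEmptyStrings = strings.count(StringPredicates.notEmpty());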
Note: I am a contributor to Eclipse Collections.
You can accomplish this as long nonEmptyStrings = s.filter(str -> !str.isEmpty()).count(); (the lambda parameter is renamed so it does not shadow the local variable s).
Tip: to negate a collection.stream().anyMatch(...), one can use collection.stream().noneMatch(...)
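For example, sticking with the empty-string check (a small sketch, assuming a List<String> strings):
boolean hasNoEmpty = !strings.stream().anyMatch(String::isEmpty);
boolean hasNoEmptyAlt = strings.stream().noneMatch(String::isEmpty); // same result, no negation needed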
If you're using Spring Boot (2.0.0+) you can use:
import org.springframework.util.StringUtils;
...
.filter(StringUtils::hasLength)
...
Which does:
return (str != null && !str.isEmpty());
So it has the required negating effect for isEmpty.
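So the count from the original question can be written as (a sketch, assuming a Stream<String> s):
long nonEmptyStrings = s.filter(StringUtils::hasLength).count();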
Related
In Java 8, many functional interfaces are provided, such as UnaryOperator, BinaryOperator and Function.
The Code,
UnaryOperator<Integer> uOp = (Integer i) -> i * 10;
BinaryOperator<Integer> bOp = (Integer i1, Integer i2) -> i1 * i2 * 10;
can always be written using Function as below,
Function<Integer, Integer> f1 = (Integer i) -> i * 10;
BiFunction<Integer, Integer, Integer> f2 = (Integer i1, Integer i2) -> i1 * i2 * 10;
So, what's the use of these operator interfaces?
Are they achieving anything different than what can be achieved using Function ?
Functional interfaces should be as specialised as possible.
Having
Function<Integer, Integer> f1 = (Integer i) -> i * 10;
Instead of:
UnaryOperator<Integer> uop1 = (Integer i) -> i * 10;
is actually a code smell (there is also Sonar Rule squid:S4276 for this).
The simple reason is that these interfaces were created to avoid repeating the same type parameter n times when you really have only one.
public interface UnaryOperator<T> extends Function<T, T>
So writing a Function<T, T> is just longer and unnecessary.
The same goes for other interfaces like IntConsumer vs. Consumer<Integer> or DoubleToIntFunction vs. Function<Double, Integer>, where the second option may lead to unnecessary auto-boxing and may degrade performance.
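For instance, a small sketch contrasting the boxed and the primitive-specialised consumer (IntStream.forEach accepts an IntConsumer directly):
Consumer<Integer> boxed = i -> System.out.println(i); // each element is boxed to Integer first
IntConsumer primitive = i -> System.out.println(i); // works on int directly, no boxing
IntStream.rangeClosed(1, 3).forEach(primitive);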
So that's why using the more specific and appropriate interface makes your code cleaner and keeps you away from surprises.
They are there for your convenience. You can spare yourself writing BiFunction<Integer, Integer, Integer> and just write/use a BinaryOperator<Integer> instead. An additional benefit: you can ensure that the function given to you accepts one or two parameters of the same type and returns exactly that type, without much more writing.
Additionally, due to the nature of BinaryOperator<T>, it makes more sense to put something like minBy and maxBy there, which wouldn't make as much sense in a BiFunction<T, U, R>. Since the given parameters are of the same type and the return type is ensured to be the same as well, a comparator can easily be applied... very convenient.
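For example, BinaryOperator.minBy and maxBy take a Comparator and return an operator over a single type (a small sketch):
// picks the shorter of two strings; inputs and result all share one type
BinaryOperator<String> shorter = BinaryOperator.minBy(Comparator.comparingInt(String::length));
String result = shorter.apply("stream", "jdk"); // "jdk"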
Yes, they're functionally identical. They even extend the interfaces you're talking about and use the same SAM. The UnaryOperator and BinaryOperator interfaces only add static methods.
public interface UnaryOperator<T> extends Function<T, T>
public interface BinaryOperator<T> extends BiFunction<T,T,T>
They're simply there for brevity. Why specify a type parameter 2 or 3 times when you can do it once?
UnaryOperator and BinaryOperator are shortcuts for Function and BiFunction when the return type is the same as the input type. I think they may also carry different meanings: an operation and a function may have different interpretations depending on your context. This is essentially for code readability, not for technical reasons.
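One place where the JDK itself relies on the specialised type is List.replaceAll, which takes a UnaryOperator<E> because an in-place replacement cannot change the element type (a minimal sketch):
List<String> names = new ArrayList<>(Arrays.asList("foo", "bar"));
names.replaceAll(String::toUpperCase); // used as a UnaryOperator<String>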
I need to compose a stream operation with a predicate based on a boolean function. I found a workaround by passing the method reference through a helper that returns its argument as a predicate, as shown:
public <T> Predicate<T> pred(final Predicate<T> aLambda) {
return aLambda;
}
public List<String> foo() {
return new ArrayList<String>().stream() //of course, this does nothing, simplified
.filter(pred(String::isEmpty).negate())
.collect(Collectors.toList());
}
The 'pred' method seems to do nothing, and yet neither this:
public List<String> foo() {
return new ArrayList<String>().stream()
.filter((String::isEmpty).negate())
.collect(Collectors.toList());
}
nor any in-line conversion:
public List<String> foo() {
return new ArrayList<String>().stream()
.filter(((Predicate)String::isEmpty).negate())
.collect(Collectors.toList());
}
seems to work. Both fail with the error:
The target type of this expression must be a functional interface
What fancy conversion happens in the 'pred(...)' method?
You could write a utility method:
class PredicateUtils {
public static <T> Predicate<T> not(Predicate<T> predicate) {
return predicate.negate();
}
}
and use it as follows:
.filter(not(String::isEmpty))
I believe it's more readable than casting to a Predicate<T>:
.filter(((Predicate<String>)String::isEmpty).negate())
Though I would go with a simple lambda:
s -> !s.isEmpty()
What fancy conversion happens in the pred(...) method?
You have specified a context - the type to work with. For instance, String::isEmpty could be a Function<String, Boolean>, or a Predicate<String>, or your own @FunctionalInterface, or something else.
You clearly said that you expect a Predicate<T> and that you return an instance of Predicate<T>. The compiler is now able to figure out the type you want to use.
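A minimal illustration of that target typing (the same idea as the inference examples earlier in the thread):
// the very same method reference adapts to whichever functional interface the context asks for
Predicate<String> asPredicate = String::isEmpty;
Function<String, Boolean> asFunction = String::isEmpty;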
You can use
((Predicate<String>) String::isEmpty).negate()
(note the use of the proper generic type)
or (preferred):
s -> !s.isEmpty()
which is way simpler and readable.
Your third version almost worked:
Arrays.<String>asList("Foo", "Bar", "").stream()
.filter(((Predicate<String>)String::isEmpty).negate())
.collect(Collectors.toList());
This seems to compile just fine.
In Java 8, how is a Function defined to fit varargs?
We have a method like this:
private String doSomethingWithArray(String... a){
//// do something
return "";
}
And for some reason I need to call it using a Java 8 Function (because andThen can be used along with other functions).
And thus I wanted to define it as given below.
Function<String... , String> doWork = a-> doSomethingWithArray(a) ;
That gives me a compilation error. The following works, but the input now has to be an array and cannot be a single string.
Function<String[] , String> doWork = a-> doSomethingWithArray(a) ;
Here I mentioned String, but it can be an array of any Object.
Is there a way to use varargs (...) instead of an array ([]) as the input parameter?
Or if I create a new interface similar to Function, is it possible to create something like below?
@FunctionalInterface
interface MyFunction<T... , R> {
//..
}
You cannot use the varargs syntax in this case as it's not a method parameter.
Depending on what you're using the Function type for, you may not even need it at all and you can just work with your methods as they are without having to reference them through functional interfaces.
As an alternative you can define your own functional interface like this:
@FunctionalInterface
public interface MyFunctionalInterface<T, R> {
R apply(T... args);
}
then your declaration becomes:
MyFunctionalInterface<String, String> doWork = a -> doSomethingWithArray(a);
and calling doWork can now be:
String one = doWork.apply("one");
String two = doWork.apply("one","two");
String three = doWork.apply("one","two","three");
...
...
Note: the functional interface name is just a placeholder and can be improved to be consistent with the Java naming convention for functional interfaces, e.g. VarArgFunction or something of that ilk.
Because arrays and varargs are override-equivalent, the following is possible:
@FunctionalInterface
interface VarArgsFunction<T, U> extends Function<T[], U> {
@Override
U apply(T... args);
}
// elsewhere
VarArgsFunction<String, String> toString =
args -> Arrays.toString(args);
String str = toString.apply("a", "b", "c");
// and we could pass it to somewhere expecting
// a Function<String[], String>
That said, this has a pitfall having to do with invoking the method generically. The following throws a ClassCastException:
static void invokeApply() {
VarArgsFunction<Double, List<Double>> fn =
Arrays::asList;
List<Double> list = invokeApply(fn, 1.0, 2.0, 3.0);
}
static <T, U> U invokeApply(VarArgsFunction<T, U> fn,
T arg0, T arg1, T arg2) {
return fn.apply(arg0, arg1, arg2); // throws an exception
}
This happens because of type erasure: invoking the apply method generically creates an array whose component type is the erasure of the type variable T. In the above example, since the erasure of the type variable T is Object, it creates and passes an Object[] array to the apply method which is expecting a Double[].
Overriding the apply method with generic varargs (and, more generally, writing any generic varargs method) generates a warning, and this pitfall is exactly why. (The warning is mandated in §8.4.1 of the JLS.)
Because of that, I don't actually recommend using this. I've posted it because, well, it's interesting, it does work in simpler cases and I wanted to explain why it probably shouldn't be used.
One safe way to target a varargs method to a strongly typed Function is by using a technique called currying.
For example, if you need to target your varargs method with 3 arguments, you could do it as follows:
Function<String, Function<String, Function<String, String>>> doWork =
a1 -> a2 -> a3 -> doSomethingWithArray(a1, a2, a3);
Then, wherever you need to call the function:
String result = doWork.apply("a").apply("b").apply("c");
This technique works to target not only varargs methods, but also any method with any number of arguments of different types.
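For instance, a method with two parameters of different types can be curried the same way (repeat here is an assumed helper, not from the question):
// assumed helper, purely for illustration
static String repeat(String s, int n) {
return String.join("", Collections.nCopies(n, s));
}

Function<String, Function<Integer, String>> repeatCurried = s -> n -> repeat(s, n);
String result = repeatCurried.apply("ab").apply(3); // "ababab"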
If you already have an array with the arguments, just use a Function<String[], String>:
Function<String[], String> doWork = a -> doSomethingWithArray(a);
And then:
String[] args = {"a", "b", "c"};
String result = doWork.apply(args);
So, whenever you have a fixed number of arguments, use currying. And whenever you have dynamic arguments (represented by an array), use this last approach.
Short answer
This doesn't seem possible. The Function interface has only four methods, and none of them takes varargs.
Extend Function interface?
This doesn't work either. Since arrays are somewhat strange low-level constructs in Java, they do not work well with generic types because of type erasure. In particular, it is not possible to create an array of a generic type without contaminating your entire codebase with Class<X> reflection. Therefore, it's not really feasible to extend the Function<X, Y> interface with a default method that takes varargs and redirects to apply.
Syntax for array creation, helper methods
If you statically know the type of the arguments, then the best thing you can do is to use the inline syntax for array creation:
myFunction.apply(new KnownType[]{x, y, z});
instead of the varargs, which you want:
myFunction.apply(x, y, z); // doesn't work this way
If this is too long, you could define a helper function for creating arrays of KnownType from varargs:
// "known type array"
static KnownType[] kta(KnownType... xs) {
return xs;
}
and then use it as follows:
myFunction.apply(kta(x, y, z, w))
which would at least be somewhat easier to type and to read.
Nested methods, real varargs
If you really (I mean, really) want to pass arguments of known type to a black-box generic Function using the vararg-syntax, then you need something like nested methods. So, for example, if you want to have this:
myHigherOrderFunction(Function<X[], Y> blah) {
X x1 = ... // whatever
X x2 = ... // more `X`s
blah(x1, x2) // call to vararg, does not work like this!
}
you could use classes to emulate nested functions:
import java.util.function.*;
class FunctionToVararg {
public static double foo(Function<int[], Double> f) {
// suppose we REALLY want to use a vararg-version
// of `f` here, for example because we have to
// use it thousand times, and inline array
// syntax would be extremely annoying.
// We can use inner nested classes.
// All we really need is one method of the
// nested class, in this case.
class Helper {
// The inner usage takes array,
// but `fVararg` takes varargs!
double fVararg(int... xs) {
return f.apply(xs);
}
double solveTheActualProblem() {
// hundreds and hundreds of lines
// of code with dozens of invocations
// of `fVararg`, otherwise it won't pay off
// ...
double blah = fVararg(40, 41, 43, 44);
return blah;
}
}
return (new Helper()).solveTheActualProblem();
}
public static void main(String[] args) {
Function<int[], Double> example = ints -> {
double d = 0.0;
for (int i: ints) d += i;
return d / ints.length;
};
System.out.println(foo(example)); // should give `42`
}
}
As you see, that's a lot of pain. Is it really worth it?
Conclusion
Overall, this seems to be an idea which would be extremely painful to implement in Java, no matter what you do. At least I don't see any simple solutions. To be honest, I also don't see where it would be really necessary (maybe it's just me vs. the BLUB-paradox).
Unfortunately, adding a method to intercede and do the translation for you was all I could come up with.
import java.io.PrintStream;
import java.util.function.Function;
public class FunctionalTest {
public static void main( String[] args ) {
kludge( "a","b","c" );
}
private static Function<String[],PrintStream> ref = a -> System.out.printf( "", a );
public static void kludge( String... y ) {
ref.apply( y );
}
}
Given the following code:
stream.filter(o1 -> Objects.equals(o1.getSome().getSomeOther(),
o2.getSome().getSomeOther()))
How could that possibly be simplified?
Is there some equals-utility that lets you first extract a key just like there is Comparator.comparing which accepts a key extractor function?
Note that the code itself (getSome().getSomeOther()) is actually generated from a schema.
EDIT: (after discussing with a colleague and after revisiting: Is there a convenience method to create a Predicate that tests if a field equals a given value?)
We now have come to the following reusable functional interface:
@FunctionalInterface
public interface Property<T, P> {
P extract(T object);
default Predicate<T> like(T example) {
Predicate<P> equality = Predicate.isEqual(extract(example));
return (value) -> equality.test(extract(value));
}
}
and the following static convenience method:
static <T, P> Property<T, P> property(Property<T, P> property) {
return property;
}
The filtering now looks like:
stream.filter(property(t -> t.getSome().getSomeOther()).like(o2))
What I like about this solution, compared to the previous one: it clearly separates the extraction of the property from the creation of the Predicate itself, and it states more clearly what is going on.
Previous solution:
<T, U> Predicate<T> isEqual(T other, Function<T, U> keyExtractFunction) {
U otherKey = keyExtractFunction.apply(other);
return t -> Objects.equals(keyExtractFunction.apply(t), otherKey);
}
which results in the following usage:
stream.filter(isEqual(o2, t -> t.getSome().getSomeOther()))
but I am more than happy if anyone has a better solution.
I think that your question's approach is more readable than your answer's. And I also think that using inline lambdas is fine, as long as the lambda is simple and short.
However, for maintainability, readability, debugging and testability reasons, I always move the logic I'd use in a lambda (either a predicate or a function) to one or more methods. In your case, I would do:
class YourObject {
private Some some;
public boolean matchesSomeOther(YourObject o2) {
return this.getSome().matchesSomeOther(o2.getSome());
}
}
class Some {
private SomeOther someOther;
public boolean matchesSomeOther(Some some2) {
return Objects.equals(this.getSomeOther(), some2.getSomeOther());
}
}
With these methods in place, your predicate now becomes trivial:
YourObject o2 = ...;
stream.filter(o2::matchesSomeOther)
I'm using Java 8 along with the Pair from Apache Commons Lang3.
The first thing I am trying to do is get a stream from a List<T> and to take a Function<T,U> and ultimately create a List<Pair<T,U>>. Currently I am creating a Function<T,Pair<T,U>> with the specific types I want and using this to map the stream. What I want is something like:
public static <T, U> Function<T, Pair<T, U>> tupledResult(final Function<T, U> generator) {
Objects.requireNonNull(generator);
return (T t) -> new ImmutablePair<>(t, generator.apply(t));
}
The next problem is that now that I have a List<Pair<T, U>> I want to be able to use forEach to apply a BiConsumer<T,U> (similar to the tupled function in Scala). I guess it would look like:
public static <T, U> Consumer<Pair<T, U>> tupled(final BiConsumer<T, U> consumer) {
Objects.requireNonNull(consumer);
return (final Pair<T, U> p) -> consumer.accept(p.getLeft(), p.getRight());
}
Is there anything in Apache Commons Lang3 that does this or should I roll my own? If the latter, is this something that would be useful to contribute, or is this a bad solution?
Example
This is the sort of thing I want to achieve:
private void doStuff(final List<Thing> things) {
final List<Pair<Thing, Other>> otherThings = things.stream()
.map(tupledResult(ThingHelper::generateOther))
.collect(Collectors.toList());
otherThings.stream().forEach(tupled((final Thing thing, final Other other) -> {
doSomething(thing, other);
}));
otherThings.stream().forEach(tupled((final Thing thing, final Other other) -> {
doSomethingElse(thing, other);
}));
}
The points here are that ThingHelper.generateOther is relatively expensive and I only want to do it once. Also doSomething must be applied to everything first and then doSomethingElse afterwards.
The pairs never leave the scope of this method nor do I want to overload the methods to take a pair. In this case I really don't care about the lack of semantics of the pair, all that matters is the order. doSomething and doSomethingElse are the ones providing the semantics.
Such methods are absent from Apache Commons Lang3, as that library is Java 6 compatible, while the methods you want must return objects from the java.util.function package, which appeared only in Java 8.
If your Thing objects are not repeating, it's quite natural in your case to use Map instead:
private void doStuff(final List<Thing> things) {
final Map<Thing, Other> otherThings = things.stream()
.collect(Collectors.toMap(Function.identity(), ThingHelper::generateOther));
otherThings.forEach((final Thing thing, final Other other) -> {
doSomething(thing, other);
});
otherThings.forEach((final Thing thing, final Other other) -> {
doSomethingElse(thing, other);
});
}
Or even shorter:
private void doStuff(List<Thing> things) {
Map<Thing, Other> otherThings = things.stream()
.collect(toMap(x -> x, ThingHelper::generateOther));
otherThings.forEach(this::doSomething);
otherThings.forEach(this::doSomethingElse);
}
This way you don't need wrappers, as Map.forEach already accepts a BiConsumer, and the second parameter of Collectors.toMap essentially replaces your tupledResult.
In FunctionalJava (https://github.com/functionaljava/functionaljava) I would do:
private void doStuff(final List<Thing> things) {
fj.data.List<P2<Thing, Other>> otherThings = fj.data.List.list(things)
.map(t -> P.p(t, ThingHelper.generateOther.apply(t)));
otherThings.forEachDoEffect(p -> doSomething(p._1(), p._2()));
otherThings.forEachDoEffect(p -> doSomethingElse(p._1(), p._2()));
}
Tuples are supported as products with classes P, P1, P2, etc (https://functionaljava.ci.cloudbees.com/job/master/javadoc/).