Here is an overview of the Java code I have:
// An interface and an implementation class:
public interface MyInterface<T1, T2> { ... }
public class MyImplementation implements MyInterface<Integer, String> { ... }
// Another class
public class MyClass<T3, T4> { ... }
// Function I want to call
void doStuff(MyInterface i) {
MyClass<Integer, String> n;
}
// I want to call the function like this:
MyInterface mi = new MyImplementation();
doStuff(mi);
What I can't figure out is whether I can get MyClass<Integer, String> n; to somehow use the generic types from the MyImplementation class passed into doStuff(). In this case, n would automatically use <Integer, String>, because that's what MyImplementation uses.
Yes, you can.
Let's move away from nebulous hypotheticals and take real classes: Collection<T>, Map<K, V>, and Function<F, T>. Let's say you want to write a method in the Map type (or interface, doesn't matter, a signature is a signature) that takes a 'key converter' (a thing that converts Ks into something else) and returns a collection of the something-else: each key in the map is thrown through the converter and added to the result collection.
class MapImpl<K, V> implements Map<K, V> {
public <T> Collection<T> convertKeys(Function<K, T> converter) {
List<T> out = new ArrayList<T>();
for (K key : keySet()) out.add(converter.apply(key));
return out;
}
}
A lot of concepts are being used here:
The implementation doesn't lock in the types of K and V. You don't just inherit typevars from interfaces you implement; MapImpl declares its own K and V, which are then also used as the K and V for the interface. That covers line 1.
The convertKeys method introduces its own unique typevar, in addition to the K and V it already has. That's simply how the method works: the map's keys have some type, the values have some other type, and this converter converts to some third type. Three types: K, V, and T. A method can introduce new typevars just for the method; that's what the <T> is all about in line 2.
Any time you write the name of a generified type, you MUST put <> after it with appropriate type arguments. Or leave the <> empty, which means: hey, compiler, figure it out if you can (the so-called diamond operator). In your snippet, you use MyInterface i as a method parameter type, and that's bad: MyInterface is generic, so it must have <> behind it. In this case you have to spell the arguments out, because there is no way for the compiler to figure them out.
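For instance, a minimal illustration (HashMap standing in for any generified type):
// diamond: the compiler infers <String, Integer> from the declaration on the left
Map<String, Integer> counts = new HashMap<>();
// a method parameter has nothing to infer from, so the type arguments must be spelled out
void takeMap(Map<String, Integer> m) { ... }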
Going back to your code, it might look like:
public <K, V> void doStuff(MyInterface<K, V> i) {
MyClass<K, V> n;
}
NB: Remember, generics link things. That final snippet is simply saying: There is a link between the first typearg of the MyInterface part of the 'i' parameter's type, and the first typearg of the MyClass part of the 'n' local variable. I don't know what that type is. I do know it is the same type. Generics are completely useless unless the typevar occurs in 2 or more places.
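To see the link in action, here is a hypothetical call (assuming MyImplementation implements MyInterface<Integer, String>, as in the question):
MyInterface<Integer, String> mi = new MyImplementation();
doStuff(mi); // the compiler infers K = Integer, V = String, so n is a MyClass<Integer, String>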
NB2: If you then want to get really fancy, you start thinking about co-/contra-/invariance. For example, in the key-converter story, a converter that can convert any Object into something else would be fine too. In fact, any converter that accepts Ks, or any supertype of K, is suitable. So, really, you end up with: public <T> Collection<T> convertKeys(Function<? super K, ? extends T> converter) {} - but that kind of advanced variance engineering is a nice bonus; feel free to skip those bits in your source until you run into trouble because you didn't take it into consideration.
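For example, with that variance-enabled signature, a converter that accepts any Object is a perfectly good key converter (a minimal sketch, assuming a MapImpl instance exists):
MapImpl<String, Integer> map = ...;
Function<Object, String> describe = Object::toString;
Collection<String> descriptions = map.convertKeys(describe); // OK: Object is a supertype of String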
I'm a little confused about how generics work. I'm learning about the Function API in Java; I was testing the Function interface and got confused about how generics work in the compose method.
Reading about generics on the official Java tutorial website, I understood that if a method uses a generic type in its return type or parameters, that type has to be declared in the method's signature, as explained below.
Here is the method I read about in the official docs tutorial:
public static <K, V> boolean compare(Pair<K, V> p1, Pair<K, V> p2) {
return p1.getKey().equals(p2.getKey()) &&
p1.getValue().equals(p2.getValue());
}
The method above has two type parameters, K and V, which are declared in the signature after the static keyword. But when I read the Java Function API, there is a method called compose, and its signature is:
default <V> Function<V, R> compose(Function<? super V, ? extends T> before) {
Objects.requireNonNull(before);
return (V v) -> apply(before.apply(v));
}
1) The first question: where are T and R declared? They are used in the return type and in the parameter. Or is my understanding wrong?
Then I read more of the generics tutorial and tried to understand the concept of super and extends in generics. After testing the compose method more, I got confused again about how super and extends work in it.
public static void main(String... args){
Function<Integer, String> one = (i) -> i.toString();
Function<String, Integer> two = (i) -> Integer.parseInt(i);
one.compose(two);
}
As above, I have declared two Functions with lambdas. One has an Integer input and a String output; the other is the reverse.
2) The second question: how are Integer and String related to extends and super? There is no relation between the String and Integer classes, neither extends the other, so how is this working?
I tried my best to explain my question/problem. Let me know what you didn't understand and I will try again.
Where are T and R defined?
Remember, compose is declared in the Function interface. It can not only use generic parameters of its own, but also the type's generic parameters. R and T are declared in the interface declaration:
interface Function<T, R> {
...
}
What are ? extends and ? super?
? is a wildcard. It means that the generic parameter can be anything. extends and super give constraints to the wildcard: ? super V means that whatever ? is, it must be V itself or a superclass of V. ? extends T means that whatever ? is, it must be T itself or a subclass of T.
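A quick illustration with familiar types:
List<? super Integer> a = new ArrayList<Number>(); // OK: Number is a superclass of Integer
List<? extends Number> b = new ArrayList<Integer>(); // OK: Integer is a subclass of Number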
Now let's look at this:
Function<Integer, String> one = (i) -> i.toString();
Function<String, Integer> two = (i) -> Integer.parseInt(i);
one.compose(two);
From this, we can deduce that T is Integer and R is String. What is V? V must be some type such that the constraint Function<? super V, ? extends T> is satisfied.
We can check this by substituting in the argument we passed - Function<String, Integer> - which gives the constraints String super V and Integer extends Integer.
The second constraint is already satisfied, while the first now says that String must be V itself or a superclass of V. Since String is final and cannot have subclasses, V must be String.
Hence, you can write something like:
Function<String, String> f = one.compose(two);
but not
Function<Integer, String> f = one.compose(two);
When you compose a Function<Integer, String> and a Function<String, Integer>, you cannot possibly get a Function<Integer, String>. If you try, V is automatically inferred to be Integer, but String super Integer is not satisfied, so the compilation fails. See the use of the constraints now? They prevent programmers from writing things that don't make sense. Another use of the constraints is to allow you to do something like this:
Function<A, B> one = ...
Function<C, SubclassOfA> two = ...
Function<SubclassOfC, B> f = one.compose(two);
There is no relationship between Integer and String in this case, it's all about V.
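To make that last snippet concrete with JDK types (a sketch: CharSequence/String play the roles of A and SubclassOfA, Object/Number the roles of C and SubclassOfC):
Function<CharSequence, Integer> one = CharSequence::length;
Function<Object, String> two = Object::toString;
Function<Number, Integer> f = one.compose(two); // V = Number works: Object super Number, and String extends CharSequence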
1) The compose function is part of the interface Function<T, R>. As you can see in the documentation for this interface:
Type Parameters:
T - the type of the input to the function
R - the type of the result of the function
2) The super and extends constraints in question aren't applied to T and R; they're applied to the generic type parameters of the function that you pass in as an argument to the compose function.
Basically this means that if you have:
Function<ClassA, ClassB> one;
Function<SomeSuperClassOfC, SomeSubclassOfA> two;
then it's valid to call
Function<ClassC, ClassB> three = one.compose(two);
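A concrete, compilable instance of that pattern (hypothetical choices: Integer/String for ClassA/ClassB, Number for ClassC):
Function<Integer, String> one = i -> i.toString();
Function<Object, Integer> two = Object::hashCode;
Function<Number, String> three = one.compose(two); // Object is a superclass of Number; Integer trivially satisfies "subclass of Integer"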
I will try to explain from scratch.
interface Function<T, R> - this is an interface with one method that must be implemented: R apply(T t);
In Java prior to 8 we had to write:
Function<Integer, String> one = new Function<Integer, String>() {
@Override
public String apply(Integer i) {
return i.toString();
}
};
now you can use it:
String resultApply = one.apply(5);
now, I think, you get the idea.
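In Java 8 the same thing shrinks to a lambda, and compose then chains two such functions (a minimal sketch):
Function<Integer, String> one = i -> i.toString();
Function<String, Integer> two = s -> Integer.parseInt(s);
Function<String, String> composed = one.compose(two); // two runs first, then one
String result = composed.apply("5"); // "5" -> 5 -> "5"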
Given the following code:
stream.filter(o1 -> Objects.equals(o1.getSome().getSomeOther(),
o2.getSome().getSomeOther()))
How could that possibly be simplified?
Is there some equals-utility that lets you first extract a key, just like Comparator.comparing accepts a key-extractor function?
Note that the code itself (getSome().getSomeOther()) is actually generated from a schema.
EDIT (after discussing with a colleague and after revisiting Is there a convenience method to create a Predicate that tests if a field equals a given value?):
We have now come to the following reusable functional interface:
@FunctionalInterface
public interface Property<T, P> {
P extract(T object);
default Predicate<T> like(T example) {
Predicate<P> equality = Predicate.isEqual(extract(example));
return (value) -> equality.test(extract(value));
}
}
and the following static convenience method:
static <T, P> Property<T, P> property(Property<T, P> property) {
return property;
}
The filtering now looks like:
stream.filter(property(t -> t.getSome().getSomeOther()).like(o2))
What I like about this solution compared to the previous one: it clearly separates the extraction of the property from the creation of the Predicate itself, and it states more clearly what is going on.
Previous solution:
<T, U> Predicate<T> isEqual(T other, Function<T, U> keyExtractFunction) {
U otherKey = keyExtractFunction.apply(other);
return t -> Objects.equals(keyExtractFunction.apply(t), otherKey);
}
which results in the following usage:
stream.filter(isEqual(o2, t -> t.getSome().getSomeOther()))
but I am more than happy if anyone has a better solution.
I think that your question's approach is more readable than the one in your answer. I also think that using inline lambdas is fine, as long as the lambda is simple and short.
However, for maintenance, readability, debugging, and testability reasons, I always move the logic I'd use in a lambda (either a predicate or a function) into one or more methods. In your case, I would do:
class YourObject {
private Some some;
public boolean matchesSomeOther(YourObject o2) {
return this.getSome().matchesSomeOther(o2.getSome());
}
}
class Some {
private SomeOther someOther;
public boolean matchesSomeOther(Some some2) {
return Objects.equals(this.getSomeOther(), some2.getSomeOther());
}
}
With these methods in place, your predicate now becomes trivial:
YourObject o2 = ...;
stream.filter(o2::matchesSomeOther)
I'm using Java 8 along with the Pair from Apache Commons Lang3.
The first thing I am trying to do is get a stream from a List<T>, take a Function<T, U>, and ultimately create a List<Pair<T, U>>. Currently I am creating a Function<T, Pair<T, U>> with the specific types I want and using it to map the stream. What I want is something like:
public static <T, U> Function<T, Pair<T, U>> tupledResult(final Function<T, U> generator) {
Objects.requireNonNull(generator);
return (T t) -> new ImmutablePair<>(t, generator.apply(t));
}
The next problem is that, now that I have a List<Pair<T, U>>, I want to be able to use forEach to apply a BiConsumer<T, U> (similar to the tupled function in Scala). I guess it would look like:
public static <T, U> Consumer<Pair<T, U>> tupled(final BiConsumer<T, U> consumer) {
Objects.requireNonNull(consumer);
return (final Pair<T, U> p) -> consumer.accept(p.getLeft(), p.getRight());
}
Is there anything in Apache Commons Lang3 that does this, or should I roll my own? If the latter, is this something that would be useful to contribute, or is this a bad solution?
Example
This is the sort of thing I want to achieve:
private void doStuff(final List<Thing> things) {
final List<Pair<Thing, Other>> otherThings = things.stream()
.map(tupledResult(ThingHelper::generateOther))
.collect(Collectors.toList());
otherThings.stream().forEach(tupled((final Thing thing, final Other other) -> {
doSomething(thing, other);
}));
otherThings.stream().forEach(tupled((final Thing thing, final Other other) -> {
doSomethingElse(thing, other);
}));
}
The point here is that ThingHelper.generateOther is relatively expensive and I only want to run it once per element. Also, doSomething must be applied to everything first and doSomethingElse afterwards.
The pairs never leave the scope of this method, nor do I want to overload the methods to take a pair. In this case I really don't care about the lack of semantics of the pair; all that matters is the order. doSomething and doSomethingElse are the ones providing the semantics.
Such methods are absent from Apache Commons Lang3, as this library is Java 6 compatible, while the methods you want must return objects from the java.util.function package, which appeared only in Java 8.
If your Thing objects are not repeating, it's quite natural in your case to use Map instead:
private void doStuff(final List<Thing> things) {
final Map<Thing, Other> otherThings = things.stream()
.collect(Collectors.toMap(Function.identity(), ThingHelper::generateOther));
otherThings.forEach((final Thing thing, final Other other) -> {
doSomething(thing, other);
});
otherThings.forEach((final Thing thing, final Other other) -> {
doSomethingElse(thing, other);
});
}
Or even shorter:
private void doStuff(List<Thing> things) {
Map<Thing, Other> otherThings = things.stream()
.collect(toMap(x -> x, ThingHelper::generateOther));
otherThings.forEach(this::doSomething);
otherThings.forEach(this::doSomethingElse);
}
This way you don't need wrappers, as Map.forEach already accepts a BiConsumer, and Collectors.toMap's second parameter essentially replaces your tupledResult.
In FunctionalJava (https://github.com/functionaljava/functionaljava) I would do:
private void doStuff(final List<Thing> things) {
fj.data.List<P2<Thing, Other>> otherThings = fj.data.List.list(things)
.map(t -> P.p(t, ThingHelper.generateOther.apply(t)));
otherThings.forEachDoEffect(p -> doSomething(p._1(), p._2()));
otherThings.forEachDoEffect(p -> doSomethingElse(p._1(), p._2()));
}
Tuples are supported as products with classes P, P1, P2, etc (https://functionaljava.ci.cloudbees.com/job/master/javadoc/).
In Java 8, you can use a method reference to filter a stream, for example:
Stream<String> s = ...;
long emptyStrings = s.filter(String::isEmpty).count();
Is there a way to create a method reference that is the negation of an existing one, i.e. something like:
long nonEmptyStrings = s.filter(not(String::isEmpty)).count();
I could create the not method like below but I was wondering if the JDK offered something similar.
static <T> Predicate<T> not(Predicate<T> p) { return o -> !p.test(o); }
Predicate.not( … )
Java 11 offers a new method Predicate#not.
So you can negate the method reference:
Stream<String> s = ...;
long nonEmptyStrings = s.filter(Predicate.not(String::isEmpty)).count();
I'm planning to static import the following to allow for the method reference to be used inline:
public static <T> Predicate<T> not(Predicate<T> t) {
return t.negate();
}
e.g.
Stream<String> s = ...;
long nonEmptyStrings = s.filter(not(String::isEmpty)).count();
Update: starting from Java 11, the JDK offers a similar solution built in as well.
There is a way to compose a method reference that is the opposite of an existing method reference. See @vlasec's answer below, which shows how: explicitly cast the method reference to a Predicate and then convert it using the negate function. That is one way among a few other not-too-troublesome ways to do it.
The opposite of this:
Stream<String> s = ...;
long emptyStrings = s.filter(String::isEmpty).count();
is this:
Stream<String> s = ...;
long notEmptyStrings = s.filter(((Predicate<String>) String::isEmpty).negate()).count();
or this:
Stream<String> s = ...;
long notEmptyStrings = s.filter( it -> !it.isEmpty() ).count();
Personally, I prefer the latter technique because I find it clearer to read it -> !it.isEmpty() than a long, verbose explicit cast followed by negate.
One could also make a predicate and reuse it:
Predicate<String> notEmpty = (String it) -> !it.isEmpty();
Stream<String> s = ...;
long notEmptyStrings = s.filter(notEmpty).count();
Or, if you have a collection or array, just use a for-loop, which is simple, has less overhead, and *might be **faster:
int notEmpty = 0;
for(String s : list) if(!s.isEmpty()) notEmpty++;
*If you want to know which is faster, use JMH (http://openjdk.java.net/projects/code-tools/jmh), and avoid hand-rolled benchmark code unless it avoids all JVM optimizations; see Java 8: performance of Streams vs Collections.
**I am getting flak for suggesting that the for-loop technique is faster. It eliminates a stream creation, it eliminates another method call (the negating function for the predicate), and it eliminates a temporary accumulator list/counter. So a few things are saved by the last construct, which might make it faster.
I do think it is simpler and nicer though, even if not faster. If the job calls for a hammer and a nail, don't bring in a chainsaw and glue! I know some of you take issue with that.
Wish list: I would like to see Java Stream functions evolve a bit now that Java users are more familiar with them. For example, the count method in Stream could accept a Predicate so that this can be done directly, like this:
Stream<String> s = ...;
long notEmptyStrings = s.count(it -> !it.isEmpty());
or
List<String> list = ...;
long notEmptyStrings = list.count(it -> !it.isEmpty());
Predicate has the methods and, or, and negate.
However, String::isEmpty is not a Predicate by itself; it's just a String -> Boolean method reference and it could still become anything, e.g. a Function<String, Boolean>. Type inference is what needs to happen first. The filter method infers the type implicitly, but if you negate the reference before passing it as an argument, that no longer happens. As @axtavt mentioned, an explicit cast can be used as an ugly workaround:
s.filter(((Predicate<String>) String::isEmpty).negate()).count()
There are other ways advised in other answers, with static not method and lambda most likely being the best ideas. This concludes the tl;dr section.
However, if you want some deeper understanding of lambda type inference, I'd like to explain it a bit more in depth, using examples. Look at these and try to figure out what happens:
Object obj1 = String::isEmpty;
Predicate<String> p1 = s -> s.isEmpty();
Function<String, Boolean> f1 = String::isEmpty;
Object obj2 = p1;
Function<String, Boolean> f2 = (Function<String, Boolean>) obj2;
Function<String, Boolean> f3 = p1::test;
Predicate<Integer> p2 = s -> s.isEmpty();
Predicate<Integer> p3 = String::isEmpty;
obj1 doesn't compile - lambdas and method references need a target functional interface (= an interface with one abstract method)
p1 and f1 work just fine, each inferring a different type
obj2 casts a Predicate to Object - silly but valid
f2 fails at runtime - you cannot cast Predicate to Function, it's no longer about inference
f3 works - you call the predicate's method test that is defined by its lambda
p2 doesn't compile - Integer doesn't have isEmpty method
p3 doesn't compile either - there is no static String.isEmpty method taking an Integer argument, and isEmpty cannot be invoked on an Integer receiver
Building on others' answers and personal experience:
Predicate<String> blank = String::isEmpty;
content.stream()
.filter(blank.negate())
Another option is to collect lambda-casting helpers, usable in non-ambiguous contexts, into one class:
public static class Lambdas {
public static <T> Predicate<T> as(Predicate<T> predicate){
return predicate;
}
public static <T> Consumer<T> as(Consumer<T> consumer){
return consumer;
}
public static <T> Supplier<T> as(Supplier<T> supplier){
return supplier;
}
public static <T, R> Function<T, R> as(Function<T, R> function){
return function;
}
}
... and then static import the utility class:
stream.filter(as(String::isEmpty).negate())
Shouldn't Predicate#negate be what you are looking for?
In this case you could use org.apache.commons.lang3.StringUtils and do
long nonEmptyStrings = s.filter(StringUtils::isNotEmpty).count();
I have written a complete utility class (inspired by Askar's proposal) that can take a Java 8 lambda expression and turn it (if applicable) into any typed standard Java 8 functional interface defined in the package java.util.function. You can, for example, do:
asPredicate(String::isEmpty).negate()
asBiPredicate(String::equals).negate()
Because there would be numerous ambiguities if all the static methods were named just as(), I opted to call each method "as" followed by the returned type. This gives us full control over the lambda interpretation. Below is the first part of the (somewhat large) utility class, revealing the pattern used.
Have a look at the complete class here (at gist).
public class FunctionCastUtil {
public static <T, U> BiConsumer<T, U> asBiConsumer(BiConsumer<T, U> biConsumer) {
return biConsumer;
}
public static <T, U, R> BiFunction<T, U, R> asBiFunction(BiFunction<T, U, R> biFunction) {
return biFunction;
}
public static <T> BinaryOperator<T> asBinaryOperator(BinaryOperator<T> binaryOperator) {
return binaryOperator;
}
... and so on...
}
You can use Predicates from Eclipse Collections
MutableList<String> strings = Lists.mutable.empty();
int nonEmptyStrings = strings.count(Predicates.not(String::isEmpty));
If you can't change the type of strings from List:
List<String> strings = new ArrayList<>();
int nonEmptyStrings = ListAdapter.adapt(strings).count(Predicates.not(String::isEmpty));
If you only need a negation of String.isEmpty() you can also use StringPredicates.notEmpty().
Note: I am a contributor to Eclipse Collections.
You can accomplish this as long nonEmptyStrings = s.filter(str -> !str.isEmpty()).count();
Tip: to negate a collection.stream().anyMatch(...), one can use collection.stream().noneMatch(...)
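For example, given a hypothetical List<String> list:
boolean anyEmpty = list.stream().anyMatch(String::isEmpty);
boolean noneEmpty = list.stream().noneMatch(String::isEmpty); // the same check, negated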
If you're using Spring Boot (2.0.0+) you can use:
import org.springframework.util.StringUtils;
...
.filter(StringUtils::hasLength)
...
Which does:
return (str != null && !str.isEmpty());
So it will have the required negation effect for isEmpty.