What are get() and unit() in this definition of applicative functor? - java

I am trying to code up some functors, monads, and applicatives in Java. I found a few and picked the one below.
In terms of category theory, what is get() returning?
The unit() seems like some kind of identity, but from what to what? Or perhaps this is a constructor?
I saw one definition of functor that had a get(). What would this be returning?
abstract class Functor6<F,T> {
    protected abstract <U> Function<? extends Functor6<F,T>,? extends Functor6<?,U>> fmap(Function<T,U> f);
}

abstract class Applicative<F,T> extends Functor6<F,T> {
    public abstract <U> U get(); // what is this in terms of category theory?
    protected abstract <U> Applicative<?,U> unit(U value); // what is this in terms of category theory?

    protected final <U> Function<Applicative<F,T>,Applicative<?,U>> apply(final Applicative<Function<T,U>,U> ff) {
        return new Function<Applicative<F,T>,Applicative<?,U>>() {
            public Applicative<?,U> apply(Applicative<F,T> ft) {
                Function<T,U> f = ff.get();
                T t = ft.get();
                return unit(f.apply(t));
            }
        };
    }
}

Some Haskell may help. Firstly, a functor:
class Functor f where
    fmap :: (a -> b) -> f a -> f b
We can read this as saying that if a type f is a Functor then it must have an fmap function, which takes a function of type a -> b and a value of type f a, and yields an f b. I.e. the type allows the function to be applied to the value within it.
Java doesn't have support for higher-kinded types, which would be required to define a functor type as above, so instead we have to approximate it:
interface Functor6<F, T> {
    <U> Function<? extends Functor6<F, T>, ? extends Functor6<?, U>> fmap(Function<T, U> f);
}
Here the generic type parameter F is the Functor type, equivalent to f in the Haskell definition, and T is the contained type (as is U), equivalent to a (and b) in the Haskell definition. In the absence of HKTs we have to use wildcards to refer to the functor type (? extends Functor6<F, T>).
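To see what this encoding looks like in use, here is a minimal sketch with a hypothetical Box container implementing Functor6 (Box is purely illustrative and not part of the code being discussed; the interface is repeated so the sketch is self-contained):

import java.util.function.Function;

// the Functor6 interface from above
interface Functor6<F, T> {
    <U> Function<? extends Functor6<F, T>, ? extends Functor6<?, U>> fmap(Function<T, U> f);
}

// a trivial single-value container acting as the functor
class Box<T> implements Functor6<Box<?>, T> {
    final T value;
    Box(T value) { this.value = value; }

    @Override
    public <U> Function<Box<T>, Box<U>> fmap(Function<T, U> f) {
        // covariant return type: Function<Box<T>, Box<U>> is a subtype of the
        // wildcard-heavy return type declared in Functor6
        return box -> new Box<>(f.apply(box.value));
    }
}

class BoxDemo {
    public static void main(String[] args) {
        Box<Integer> boxed = new Box<>(2);
        Box<Integer> bumped = boxed.fmap((Integer x) -> x + 1).apply(boxed);
        System.out.println(bumped.value); // 3
    }
}

Note that we end up passing boxed twice (once to obtain the mapping Function from fmap, and once as that Function's argument), which already hints at how awkward this encoding is compared to Haskell's fmap.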
Next, Applicative:
class (Functor f) => Applicative f where
    pure  :: a -> f a
    (<*>) :: f (a -> b) -> f a -> f b
I.e., for a type f to be an applicative it must be a functor, have a pure operation which lifts a value into the applicative f, and an apply operation (<*>) which, given a function (a -> b) inside an f and an a inside an f, can apply the function to the value to yield a b inside an f.
Here's your Java equivalent, simplified using some Java 8 features, and with some types corrected:
interface Applicative<F, T> extends Functor6<F, T> {
    T get();

    <F, U> Applicative<F, U> unit(U value);

    default <U> Function<Applicative<F, T>, Applicative<F, U>> apply(Applicative<?, Function<T, U>> ff) {
        return ft -> {
            Function<T, U> f = ff.get();
            T t = ft.get();
            return unit(f.apply(t));
        };
    }
}
If we take it line by line:
interface Applicative<F, T> extends Functor6<F, T> {
says an Applicative is a Functor. This:
T get();
appears to be a means of getting access to the value within the applicative. This may work for specific cases, but it does not work in general. This:
<F, U> Applicative<F, U> unit(U value);
is supposed to be equivalent to the pure function in the Haskell definition. It ought to be static; otherwise you need an applicative value already in hand just to call it. However, making it static would prevent it from being overridden in the actual applicative implementation. There is no simple way to solve this in Java.
Then you have the apply method, which by rights is supposed to be equivalent to <*> in Haskell. As we can see it just gets the function f out of the applicative and then its argument, and returns an applicative containing the result of applying the function to the argument.
This is where the wheels really come off. The details of how an applicative applies a function to a value are specific to each applicative and can't be generalised like this.
In short, this approach is wrong. That's not to say you can't implement applicative functors in Java - you can and it's quite easy, but what you can't do is state within the language that they are applicatives, as you can do in Haskell (using type classes).
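To make that last point concrete, here is a minimal sketch of one specific applicative written directly in Java (Maybe is a hypothetical class used purely for illustration, not a JDK type). It has a unit/pure operation and an apply operation, but nothing in the type system states that it "is an Applicative":

import java.util.function.Function;

final class Maybe<T> {
    private final T value; // null means "nothing"

    private Maybe(T value) { this.value = value; }

    // pure / unit: lift a plain value into the applicative
    static <T> Maybe<T> just(T value) { return new Maybe<>(value); }
    static <T> Maybe<T> nothing()     { return new Maybe<>(null); }

    // fmap: apply f to the contained value, if present
    <U> Maybe<U> map(Function<T, U> f) {
        return value == null ? Maybe.<U>nothing() : just(f.apply(value));
    }

    // <*>: apply a wrapped function to the wrapped value, if both are present
    <U> Maybe<U> ap(Maybe<Function<T, U>> ff) {
        return (value == null || ff.value == null) ? Maybe.<U>nothing() : just(ff.value.apply(value));
    }
}

For example, Maybe.just(2).ap(Maybe.just(f)) with f a Function<Integer, Integer> yields a Maybe containing f applied to 2, while nothing() on either side short-circuits to nothing(). The "applicative-ness" lives entirely in the implementation, not in any interface the compiler could check.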

Related

What would a compose method in the BiFunction interface look like?

The Function interface has the compose() and andThen() methods while the BiFunction interface only has the andThen() method. My question is simply how could the corresponding method be implemented? I'll try to represent this graphically.
The single letters are parameterized types as defined by Java's Function and BiFunction interfaces. Arrows are the flow of inputs and outputs. Boxes with connected arrows are functions. The dotted box just shows how the apply method is used.
The Function interface's compose() and andThen() methods are straightforward since a Function has one input and one output and therefore can only be strung sequentially with another in two ways.
Since a BiFunction has one output, the "after" function has to be something with only one corresponding input, and Function fits the bill. And since it has two inputs, the "before" function needs to be something with two outputs? You can't have a method return two things, so there seemingly can't be a "before". The return type of each of these methods is the same as the interface they are defined in, so the proposed method should return a BiFunction.
My proposal then is a method that takes two Functions as input and returns a BiFunction. I'm not sure what else it could even be. It couldn't be two BiFunctions because then the return type would have to be a QuaterFunction.
Here is the code as it would be written in the Java Library:
public interface BiFunction<T, U, R> {
    // apply()...

    default <V, W> BiFunction<V, W, R> compose(
            Function<? super V, ? extends T> beforeLeft,
            Function<? super W, ? extends U> beforeRight) {
        Objects.requireNonNull(beforeLeft);
        Objects.requireNonNull(beforeRight);
        return (V v, W w) -> apply(beforeLeft.apply(v), beforeRight.apply(w));
    }

    // andThen()...
}
Here is the finished graph:
Here it is in use:
BiFunction<Integer, Integer, Integer> add = Integer::sum;
Function<Integer, Integer> abs = Math::abs;
BiFunction<Integer, Integer, Integer> addAbs = add.compose(abs, abs);
System.out.println(addAbs.apply(-2, -3));
// output: 5
If you want to actually test this, you can do something like this:
public interface BiFunctionWithCompose<T, U, R> extends BiFunction<T, U, R> {...
Or like this:
package myutil;
public interface BiFunction<T, U, R> extends java.util.function.BiFunction<T, U, R> {...
I have no idea if this will be useful to anyone, but it was really fun to think through and write. Have a wonderful day.

Java thenComparing wildcard signature

Why does the declaration look like this:
default <U extends Comparable<? super U>> Comparator<T> thenComparing(
        Function<? super T, ? extends U> keyExtractor)
I understand most of it. It makes sense that U can be anything as long as it's comparable to a superclass of itself, and thus also comparable to itself.
But I don't get this part: Function<? super T, ? extends U>
Why not just have: Function<? super T, U>
Can't the U just parameterize to whatever the keyExtractor returns, and still extend Comparable<? super U> all the same?
Why is it ? extends U and not U?
Because of code conventions. Check out #deduper's answer for a great explanation.
Is there any actual difference?
When writing your code normally, your compiler will infer the correct T for things like Supplier<T> and Function<?, T>, so there is no practical reason to write Supplier<? extends T> or Function<?, ? extends T> when developing an API.
But what happens if we specify the type manually?
void test() {
    Supplier<Integer> supplier = () -> 0;

    this.strict(supplier);         // OK (1)
    this.fluent(supplier);         // OK

    this.<Number>strict(supplier); // compile error (2)
    this.<Number>fluent(supplier); // OK (3)
}

<T> void strict(Supplier<T> supplier) {}
<T> void fluent(Supplier<? extends T> supplier) {}
As you can see, strict() works fine without an explicit declaration because T is inferred as Integer to match the local variable's generic type.
Then it breaks when we try to pass a Supplier<Integer> as a Supplier<Number>, because generics are invariant: Integer and Number are not compatible here.
And then it works with fluent() because ? extends Number and Integer are compatible.
In practice that can only happen if you have multiple generic types, need to specify one of them explicitly, and get the other one (the Supplier one) wrong, for example:
void test() {
    Supplier<Integer> supplier = () -> 0;
    // If one wants to specify T, then they are forced to specify U as well:
    System.out.println(this.<List<?>, Number>method(supplier));
    // And if U happens to be incorrect, then the code won't compile.
}

<T, U> T method(Supplier<U> supplier) { return null; }
Example with Comparator (original answer)
Consider the following hypothetical Comparator.comparing method signature, with plain U in place of ? extends U:
public static <T, U extends Comparable<? super U>> Comparator<T> comparing(
        Function<? super T, U> keyExtractor
)
Also, here is a small test class hierarchy:
class A implements Comparable<A> {
    public int compareTo(A object) { return 0; }
}
class B extends A { }
Now let's try this:
Function<Object, B> keyExtractor = null;
Comparator.<Object, A>comparing(keyExtractor); // compile error
error: incompatible types: Function<Object,B> cannot be converted to Function<? super Object,A>
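For contrast, here is a small sketch showing that the real JDK signature, which takes a Function<? super T, ? extends U>, does accept the equivalent call, because B is allowed where ? extends A is expected:

import java.util.Comparator;
import java.util.function.Function;

class ExtendsWildcardDemo {
    static class A implements Comparable<A> {
        public int compareTo(A object) { return 0; }
    }
    static class B extends A { }

    public static void main(String[] args) {
        Function<Object, B> keyExtractor = obj -> new B();
        // compiles: B conforms to "? extends A" in the real comparing() signature
        Comparator<Object> c = Comparator.<Object, A>comparing(keyExtractor);
    }
}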
TL;DR:
Comparator.thenComparing(Function< ? super T, ? extends U > keyExtractor) (the method your question specifically asks about) might be declared that way as an idiomatic/house coding convention thing that the JDK development team is mandated to follow for reasons of consistency throughout the API.
The long-winded version
„…But I don't get this part: Function<? super T, ? extends U>…“
That part is placing a constraint on the specific type that the Function must return. It sounds like you got that part down already though.
The U the Function returns is not just any old U, however. It must have the specific properties (a.k.a „bounds“) declared in the method's parameter section: <U extends Comparable<? super U>>.
„…Why not just have: Function<? super T, U>…“
To put it as simply as I can (because I only think of it simply, not formally): the reason is that U is not the same type as ? extends U.
Changing Comparable< ? super U > to List< ? super U > and Comparator< T > to Set< T > might make your quandary easier to reason about…
default < U extends List< ? super U > > Set< T > thenComparing(
        Function< ? super T, ? extends U > keyExtractor ) {

    T input = …;

    /* Intuitively, you'd think this would be compliant; it's not! */
    /* List< ? extends U > wtf = keyExtractor.apply( input ); */

    /* This doesn't comply to „U extends List< ? super U >“ either */
    /* ArrayList< ? super U > key = keyExtractor.apply( input ); */

    /* This is compliant because key is a „List extends List< ? super U >“
     * like the method declaration requires of U
     */
    List< ? super U > key = keyExtractor.apply( input );

    /* This is compliant because List< E > is a subtype of Collection< E > */
    Collection< ? super U > superKey = key;
    …
}
„Can't the U just parameterize to whatever the keyExtractor returns, and still extend Comparable<? super U> all the same?…“
I have established experimentally that Function< ? super T, ? extends U > keyExtractor could indeed be refactored to the more restrictive Function< ? super T, U > keyExtractor and still compile and run perfectly fine. For example, comment/uncomment the /*? extends*/ on line 27 of my experimental UnboundedComparator to observe that all of these calls succeed either way…
…
Function< Object, A > aExtractor = ( obj )-> new B( );
Function< Object, B > bExtractor = ( obj )-> new B( ) ;
Function< Object, C > cExtractor = ( obj )-> new C( ) ;
UnboundedComparator.< Object, A >comparing( aExtractor ).thenComparing( bExtractor );
UnboundedComparator.< Object, A >comparing( bExtractor ).thenComparing( aExtractor );
UnboundedComparator.< Object, A >comparing( bExtractor ).thenComparing( bExtractor );
UnboundedComparator.< Object, B >comparing( bExtractor ).thenComparing( bExtractor );
UnboundedComparator.< Object, B >comparing( bExtractor ).thenComparing( aExtractor );
UnboundedComparator.< Object, B >comparing( bExtractor ).thenComparing( cExtractor );
…
Technically, you could do the equivalent debounding in the real code. From the simple experimentation I've done — on thenComparing() specifically, since that's what your question asks about — I could not find any practical reason to prefer ? extends U over U.
But, of course, I have not exhaustively tested every use case for the method with and without the bounded ? .
I would be surprised if the developers of the JDK haven't exhaustively tested it though.
My experimentation — limited, I admit — convinced me that Comparator.thenComparing(Function< ? super T, ? extends U > keyExtractor) might be declared that way for no other reason than as an idiomatic/house coding convention thing that the JDK development team follows.
Looking at the code base of the JDK it's not unreasonable to presume that somebody somewhere has decreed: «Wherever there's a Function< T, R > the T must have a lower bound (a consumer/you input something) and the R must have an upper bound (a producer/you get something returned to you)».
For obvious reasons though, U is not the same as ? extends U. So the former should not be expected to be substitutable for the latter.
Applying Occam's razor: It's simpler to expect that the exhaustive testing the implementers of the JDK have done has established that the U -upper bounded wildcard is necessary to cover a wider number of use cases.
It seems like your question is about type arguments in general, so for simplicity I will separate the type arguments you provided from the types they belong to.
First we should note that a wildcard-parameterized type cannot access those of its members whose type is the corresponding type parameter. This is why, in your specific case, the ? extends U can be substituted for U and still work fine.
That won't hold in every case, though. The type argument U does not have the versatility and additional type safety that ? extends U has. Wildcards are a special kind of type argument: instantiations of parameterized types with wildcard type arguments are not as restricted by the type argument as they would be if the type argument were a concrete type or a type parameter; wildcards are essentially placeholders that are more general than type parameters and concrete types. The first sentence of the Java tutorial on wildcards reads:
In generic code, the question mark (?), called the wildcard, represents an unknown type.
To illustrate this point, take a look at this class:
class A<T> {}
Now let's make two declarations of this class, one with a concrete type argument and the other with a wildcard, and then instantiate them:
A<Number> aConcrete = new A<Integer>(); // Compile time error
A<? extends Number> aWild = new A<Integer>(); // Works fine
So that should illustrate how a wildcard type argument does not restrict the instantiation as much as a concrete type. But what about a type parameter? The problem with using type parameters shows up most clearly in a method. To illustrate, examine this class:
class C<U> {
    void parameterMethod(A<U> a) {}
    void wildMethod(A<? extends U> a) {}

    void test() {
        C<Number> c = new C();
        A<Integer> a = new A();
        c.parameterMethod(a); // Compile time error
        c.wildMethod(a);      // Works fine
    }
}
Notice how the references c and a are concrete types. This was addressed in another answer, but what wasn't addressed there is how the concept of type arguments relates to the compile-time error (why one type argument causes a compile-time error and the other doesn't), and this relation is the reason the declaration in question is written with the syntax it uses. That relation is the additional type safety and versatility wildcards provide over type parameters, NOT some typing convention. To illustrate this point we will have to give A a member of the type-parameter type, so:
class A<T> { T something; }
The danger of using a type parameter in the parameterMethod() is that the type parameter can be referred to in the form of a cast, which enables access to the something member.
class C<U> {
    void parameterMethod(A<U> a) { a.something = (U) "Hi"; }
}
This in turn opens the door to heap pollution. With this implementation of parameterMethod, the statement C<Number> c = new C(); in the test() method could cause heap pollution. For this reason, the compiler issues a compile-time error when a method whose argument is of a type-parameter type is passed any object without a cast from within the type parameter's declaring class; equally, a member of a type-parameter type causes a compile-time error if it is assigned any object without a cast from within the type parameter's declaring class. The really important thing to stress here is without a cast: you can still pass objects to a method whose argument is of a type-parameter type, but the argument must be cast to that type parameter (or, in this case, to the type containing the type parameter). In my example
void test() {
    C<Number> c = new C();
    A<Integer> a = new A();
    c.parameterMethod(a); // Compile time error
    c.wildMethod(a);      // Works fine
}
the c.parameterMethod(a) would work if a were cast to A<U>, so if the line looked like c.parameterMethod((A<U>) a); no compile-time error would occur. But you would get a ClassCastException at run time if you tried to set an int variable equal to a.something after parameterMethod() has been called (and again, the compiler requires the cast because U could represent anything). The whole scenario would look like this:
void test() {
    C<Number> c = new C();
    A<Integer> a = new A();
    c.parameterMethod((A<U>) a); // no compile time error because of the cast
    int x = a.something;         // compiles, but fails with a ClassCastException at run time
}
So, because a type parameter can be referenced in the form of a cast, it is illegal to pass an object from within the type parameter's declaring class to a method whose argument is of (or contains) that type parameter, without a cast. A wildcard cannot be referenced in the form of a cast, so the a in wildMethod(A<? extends U> a) cannot access the T member of A. Because of this additional type safety, and because this possibility of heap pollution is avoided with a wildcard, the Java compiler does permit a concrete type to be passed to wildMethod without a cast when it is invoked on the reference c in C<Number> c = new C(); equally, this is why a wildcard-parameterized type can be initialized with a concrete type without a cast. When I say versatility of type arguments, I'm talking about what instantiations they permit in their role in a parameterized type; and when I say additional type safety, I'm talking about the inability to reference wildcards in the form of a cast, which averts heap pollution.
I don't know why someone would cast a type parameter, but I do know a developer would at least enjoy the versatility of wildcards compared to a type parameter. I may have written this confusingly, or perhaps misunderstood your question; it seems to me to be about type arguments in general rather than this specific declaration. Also, if the keyExtractor from the declaration Function<? super T, ? extends U> keyExtractor is used in a way that never accesses the Function members belonging to the second type parameter, then again, wildcards are ideal because they can't access those members anyway; so why wouldn't a developer want the versatility that wildcards provide? It's only a benefit.

Java 8 Comparator comparing static function

Here is the source code of the comparing method in the Comparator class:
public static <T, U extends Comparable<? super U>> Comparator<T> comparing(
        Function<? super T, ? extends U> keyExtractor)
{
    Objects.requireNonNull(keyExtractor);
    return (Comparator<T> & Serializable) (c1, c2) -> keyExtractor.apply(c1).compareTo(keyExtractor.apply(c2));
}
I understand the difference between super and extends. What I don't understand is why this method has them. Can someone give me an example of what cannot be achieved when the parameter looks like this: Function<T, U> keyExtractor?
For example :
Comparator<Employee> employeeNameComparator = Comparator.comparing(Employee::getName);
can also compile with the following function definition
public static <T, U extends Comparable<? super U>> Comparator<T> comparing(
        Function<T, U> keyExtractor)
{
    Objects.requireNonNull(keyExtractor);
    return (Comparator<T> & Serializable) (c1, c2) -> keyExtractor.apply(c1).compareTo(keyExtractor.apply(c2));
}
Here is a simple example: comparing cars by weight. I will first describe the problem in text-form, and then demonstrate every possible way how it can go wrong if either ? extends or ? super is omitted. I also show the ugly partial workarounds that are available in every case. If you prefer code over prose, skip directly to the second part, it should be self-explanatory.
Informal discussion of the problem
First, the contravariant ? super T.
Suppose that you have two classes Car and PhysicalObject such that Car extends PhysicalObject. Now suppose that you have a function Weight that extends Function<PhysicalObject, Double>.
If the declaration were Function<T,U>, then you couldn't reuse the function Weight extends Function<PhysicalObject, Double> to compare two cars, because Function<PhysicalObject, Double> would not conform to Function<Car, Double>. But you obviously want to be able to compare cars by their weight. Therefore, the contravariant ? super T makes sense, so that Function<PhysicalObject, Double> conforms to Function<? super Car, Double>.
Now the covariant ? extends U declaration.
Suppose that you have two classes Real and PositiveReal such that PositiveReal extends Real, and furthermore assume that Real is Comparable.
Suppose that your function Weight from the previous example actually has a slightly more precise type Weight extends Function<PhysicalObject, PositiveReal>. If the declaration of keyExtractor were Function<? super T, U> instead of Function<? super T, ? extends U>, you wouldn't be able to make use of the fact that PositiveReal is also a Real, and therefore two PositiveReals couldn't be compared with each other, even though they implement Comparable<Real>, without the unnecessary restriction Comparable<PositiveReal>.
To summarize: with the declaration Function<? super T, ? extends U>, the Weight extends Function<PhysicalObject, PositiveReal> can be substituted for a Function<? super Car, ? extends Real> to compare Cars using the Comparable<Real>.
I hope this simple example clarifies why such a declaration is useful.
Code: Full enumeration of the consequences when either ? extends or ? super is omitted
Here is a compilable example with a systematic enumeration of all things that can possibly go wrong if we omit either ? super or ? extends. Also, two (ugly) partial work-arounds are shown.
import java.util.function.Function;
import java.util.Comparator;

class HypotheticComparators {
    public static <A, B> Comparator<A> badCompare1(Function<A, B> f, Comparator<B> cb) {
        return (A a1, A a2) -> cb.compare(f.apply(a1), f.apply(a2));
    }

    public static <A, B> Comparator<A> badCompare2(Function<? super A, B> f, Comparator<B> cb) {
        return (A a1, A a2) -> cb.compare(f.apply(a1), f.apply(a2));
    }

    public static <A, B> Comparator<A> badCompare3(Function<A, ? extends B> f, Comparator<B> cb) {
        return (A a1, A a2) -> cb.compare(f.apply(a1), f.apply(a2));
    }

    public static <A, B> Comparator<A> goodCompare(Function<? super A, ? extends B> f, Comparator<B> cb) {
        return (A a1, A a2) -> cb.compare(f.apply(a1), f.apply(a2));
    }

    public static void main(String[] args) {
        class PhysicalObject { double weight; }
        class Car extends PhysicalObject {}
        class Real {
            private final double value;
            Real(double r) {
                this.value = r;
            }
            double getValue() {
                return value;
            }
        }
        class PositiveReal extends Real {
            PositiveReal(double r) {
                super(r);
                assert(r > 0.0);
            }
        }

        Comparator<Real> realComparator = (Real r1, Real r2) -> {
            double v1 = r1.getValue();
            double v2 = r2.getValue();
            return v1 < v2 ? 1 : v1 > v2 ? -1 : 0;
        };

        Function<PhysicalObject, PositiveReal> weight = p -> new PositiveReal(p.weight);

        // bad "weight"-function that cannot guarantee that the outputs
        // are positive
        Function<PhysicalObject, Real> surrealWeight = p -> new Real(p.weight);

        // bad weight function that works only on cars
        // Note: the implementation contains nothing car-specific,
        // it would be the same for every other physical object!
        // That means: code duplication!
        Function<Car, PositiveReal> carWeight = p -> new PositiveReal(p.weight);

        // Example 1
        // badCompare1(weight, realComparator); // doesn't compile
        //
        // type error:
        //   required: Function<A,B>,Comparator<B>
        //   found: Function<PhysicalObject,PositiveReal>,Comparator<Real>

        // Example 2.1
        // Comparator<Car> c2 = badCompare2(weight, realComparator); // doesn't compile
        //
        // type error:
        //   required: Function<? super A,B>,Comparator<B>
        //   found: Function<PhysicalObject,PositiveReal>,Comparator<Real>

        // Example 2.2
        // This compiles, but for this to work, we had to loosen the output
        // type of `weight` to a non-necessarily-positive real number
        Comparator<Car> c2_2 = badCompare2(surrealWeight, realComparator);

        // Example 3.1
        // This doesn't compile, because `Car` is not *exactly* a `PhysicalObject`:
        // Comparator<Car> c3_1 = badCompare3(weight, realComparator);
        //
        // incompatible types: inferred type does not conform to equality constraint(s)
        //   inferred: Car
        //   equality constraints(s): Car,PhysicalObject

        // Example 3.2
        // This works, but with a bad code-duplicated `carWeight` instead of `weight`
        Comparator<Car> c3_2 = badCompare3(carWeight, realComparator);

        // Example 4
        // That's how it's supposed to work: compare cars by their weights. Done!
        Comparator<Car> goodComparator = goodCompare(weight, realComparator);
    }
}
Related links
Detailed illustration of definition-site covariance and contravariance in Scala: How to check covariant and contravariant position of an element in the function?
Let's say, for example, we want to compare commercial flights by what plane they use. We would therefore need a method that takes in a flight, and returns a plane:
Plane func (CommercialFlight)
That is, of course, a Function<CommercialFlight, Plane>.
Now, the important thing is that the function returns a Plane. It doesn't matter what kind of plane is returned. So a method like this should also work:
CivilianPlane func (CommercialFlight)
Now technically this is a Function<CommercialFlight, CivilianPlane>, which is not the same as a Function<CommercialFlight, Plane>. So without the extends, this function wouldn't be allowed.
Similarly, the other important thing is that it can accept a CommercialFlight as an argument. So a method like this should also work:
Plane func (Flight)
Technically, this is a Function<Flight, Plane>, which is also not the same as a Function<CommercialFlight, Plane>. So without the super, this function wouldn't be allowed either.
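Here is a minimal sketch of that flight example in code (all class names are illustrative); both key extractors are accepted by Comparator.comparing thanks to ? super T and ? extends U:

import java.util.Comparator;
import java.util.function.Function;

class VarianceDemo {
    static class Plane implements Comparable<Plane> {
        public int compareTo(Plane other) { return 0; } // real comparison logic omitted
    }
    static class CivilianPlane extends Plane { }
    static class Flight { }
    static class CommercialFlight extends Flight { }

    public static void main(String[] args) {
        Function<CommercialFlight, CivilianPlane> byCivilianPlane = f -> new CivilianPlane();
        Function<Flight, Plane> byAnyFlightsPlane = f -> new Plane();

        // Both compile where a "CommercialFlight to Plane" extractor is needed:
        Comparator<CommercialFlight> c1 = Comparator.comparing(byCivilianPlane);  // ? extends U at work
        Comparator<CommercialFlight> c2 = Comparator.comparing(byAnyFlightsPlane); // ? super T at work
    }
}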

Java Lambda to comparator conversion - intermediate representation

I'm trying to make sense of how the Comparator.comparing function works. I created my own comparing method to understand it.
private static <T,U extends Comparable<U>> Comparator<T> comparing(Function<T,U> f) {
    BiFunction<T,T,Integer> bfun = (T a, T b) -> f.apply(a).compareTo(f.apply(b));
    return (Comparator<T>) bfun;
}
The last line in this function throws an exception.
However, if I change this function to
private static <T,U extends Comparable<U>> Comparator<T> comparing(Function<T,U> f) {
    return (T a, T b) -> f.apply(a).compareTo(f.apply(b));
}
It works just fine as expected.
What is the intermediate functional interface which the second attempt uses, which is able to convert the lambda to Comparator?
What is the intermediate functional interface which the second attempt uses, which is able to convert the lambda to Comparator?
The Comparator itself.
Within the second method, you have defined a Comparator, not an intermediate object that has been cast to the Comparator.
The last line in this function throws an exception.
Yes, it should.
If two classes are functional interfaces and have similar methods (with identical signatures and the same return type), that doesn't mean they can be used interchangeably.
An interesting trick - you may make a Comparator<T> by referring to the BiFunction<T, T, Integer> bfun's method apply:
private static <T,U extends Comparable<U>> Comparator<T> comparing(Function<T,U> f) {
    final BiFunction<T,T,Integer> bfun = (T a, T b) -> f.apply(a).compareTo(f.apply(b));
    return bfun::apply; // (a, b) -> bfun.apply(a, b);
}
The intermediate functional interface in your second attempt is simply Comparator<T>:
You can see this because your code-snippet is equivalent to the following:
private static <T,U extends Comparable<U>> Comparator<T> comparing(Function<T,U> f) {
    Comparator<T> comparator = (T a, T b) -> f.apply(a).compareTo(f.apply(b));
    return comparator;
}

How generics really works as in parameters?

I'm a little confused about how generics work. I'm learning about the function API in Java, and while testing the Function interface I got confused about how generics are used in the compose method.
Reading about generics on the official Java tutorial website, I understood that if a generic type is used in a method's return type or parameters, that type has to be declared in the method's signature, as explained below.
Here is the method I read about in the official tutorial:
public static <K, V> boolean compare(Pair<K, V> p1, Pair<K, V> p2) {
    return p1.getKey().equals(p2.getKey()) &&
           p1.getValue().equals(p2.getValue());
}
The method above has two type parameters, K and V, which are declared in the signature after the static keyword as <K, V>. But when I read the Java Function API, there is a method called compose whose signature is:
default <V> Function<V, R> compose(Function<? super V, ? extends T> before) {
    Objects.requireNonNull(before);
    return (V v) -> apply(before.apply(v));
}
1) The first question: where are T and R declared? They are used in the return type and in the parameter. Or is my understanding wrong?
Then I read more generics tutorials, tried to understand the concept of super and extends in generics (and read here), tested the compose method some more, and got confused again about how super and extends work in the compose method.
public static void main(String... args) {
    Function<Integer, String> one = (i) -> i.toString();
    Function<String, Integer> two = (i) -> Integer.parseInt(i);
    one.compose(two);
}
As shown above, I have declared two Functions with lambdas. One takes an Integer input and produces a String output; the other is the reverse.
2) The second question: how are Integer and String related to extends and super? There is no relationship between the String and Integer classes, neither extends the other, so how is this working?
I tried my best to explain my question/problem. Let me know what you didn't understand and I will try again.
Where are T and R defined?
Remember, compose is declared in the Function interface. It can not only use generic parameters of its own, but also the type's generic parameters. R and T are declared in the interface declaration:
interface Function<T, R> {
...
}
What are ? extends and ? super?
? is a wildcard. It means that the generic parameter can be anything. extends and super give constraints to the wildcard. ? super V means that whatever ? is, it must be a superclass of V or V itself. ? extends T means that whatever ? is, it must be a subclass of T or T itself.
Now let's look at this:
Function<Integer, String> one = (i) -> i.toString();
Function<String, Integer> two = (i) -> Integer.parseInt(i);
one.compose(two);
From this, we can deduce that T is Integer and R is String. What is V? V must be some type such that the constraints Function<? super V, ? extends T> is satisfied.
We can do this by substituting the argument we passed in - Function<String, Integer> - to get String super V and Integer extends Integer.
The second constraint is already satisfied, while the first constraint now says that String must be a superclass of V, or V itself. String cannot have subclasses, so V must be String.
Hence, you can write something like:
Function<String, String> f = one.compose(two);
but not
Function<Integer, String> f = one.compose(two);
When you compose a Function<Integer, String> and a Function<String, Integer> you cannot possibly get a Function<Integer, String>. If you try to do this, V is automatically inferred to be Integer. But String super Integer is not satisfied, so the compilation fails. See the use of the constraints now? It is to avoid programmers writing things that don't make sense. Another use of the constraints is to allow you to do something like this:
Function<A, B> one = ...
Function<C, SubclassOfB> two = ...
Function<SubclassOfC, B> f = one.compose(two);
There is no relationship between Integer and String in this case, it's all about V.
1) The compose function is part of the interface Function<T,R>. As you can see in the documentation for this interface:
Type Parameters:
T - the type of the input to the function
R - the type of the result of the function
2) The super and extends constraints in questions aren't applied to T & R, they're applied to the generic type parameters of a function that you pass in as an argument to the compose function.
Basically this means that if you have:
Function<ClassA, ClassB> one;
Function<SomeSuperClassOfC, SomeSubclassOfA> two;
then it's valid to call
Function<ClassC, ClassB> three = one.compose(two)
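As a concrete sketch of that rule (with Number, String and Integer standing in for the abstract ClassA/ClassB/... names above):

import java.util.function.Function;

class ComposeDemo {
    public static void main(String[] args) {
        Function<Number, String> one = n -> n.toString();   // Function<ClassA, ClassB>
        Function<Object, Integer> two = o -> o.hashCode();  // Function<superclass of C, subclass of A>

        // valid: V is taken to be Integer, Object is a superclass of Integer,
        // and Integer is a subclass of Number
        Function<Integer, String> three = one.compose(two);
        System.out.println(three.apply(42)); // "42"
    }
}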
I will try to explain from zero;
interface Function<T, R> - this is an interface with one method that must be implemented: R apply(T t);
In Java prior to 8 we had to write:
Function<Integer, String> one = new Function<Integer, String>() {
    @Override
    public String apply(Integer i) {
        return i.toString();
    }
};
now you can use it:
String resultApply = one.apply(5);
now, I think, you get the idea.
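As a final sketch tying this back to the question, here is the same function written as a Java 8 lambda, together with the compose call from the question:

import java.util.function.Function;

class ComposeFromZero {
    public static void main(String... args) {
        // the anonymous class above, rewritten as lambdas
        Function<Integer, String> one = i -> i.toString();
        Function<String, Integer> two = i -> Integer.parseInt(i);

        // compose runs `two` first, then `one`, so the result maps String -> String
        Function<String, String> composed = one.compose(two);
        System.out.println(composed.apply("5")); // prints "5"
    }
}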
