I have the function call below, and Java gives me an error because Objects are not comparable to ints (of course...). Can someone explain what I have to change?
I tried bracing the lambda expression differently, but with no useful result. I think the lambda expression is correct and the filter function is slightly wrong, but I'm not able to find my mistake...
// function call
filter(v -> v > 5)
// function
public Optional<T> filter(Predicate<T> tester) {
    if (isPresent() && tester.test(get())) {
        return this;
    } else {
        return Optional.empty();
    }
}
I would expect an Optional.empty object, but instead I get a Java compile error for v > 5 because the Object v is not comparable to an int.
You have to make T a wrapper class which is comparable with an int, e.g.
IntStream.range(0, 10)
.filter(v -> v > 5)
.forEach(System.out::println);
is fine because v is an int.
You can't use this expression when T is unknown.
What you can do is assume that T must be a number, e.g.
filter(v -> ((Number) v).doubleValue() > 5)
however this will produce a ClassCastException if T is another type.
The real solution is to make T a Number, e.g.
class MyClass<T extends Number> {
    public Optional<T> filter(Predicate<T> test) {
or make it a specific type like int:
class MyClass {
    public OptionalInt filter(IntPredicate test) {
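A minimal sketch of the Number-bounded variant (the field, constructor and class body are my own assumptions; only the filter signature comes from the snippet above). The lambda can then compare through the Number API, e.g. v -> v.doubleValue() > 5:
import java.util.Optional;
import java.util.function.Predicate;

// Hypothetical single-value container, for illustration only.
class MyClass<T extends Number> {
    private final T value;

    MyClass(T value) { this.value = value; }

    public boolean isPresent() { return value != null; }
    public T get() { return value; }

    public Optional<T> filter(Predicate<T> tester) {
        if (isPresent() && tester.test(get())) {
            return Optional.of(get());
        } else {
            return Optional.empty();
        }
    }
}
With this, new MyClass<>(7).filter(v -> v.doubleValue() > 5) yields Optional[7], while new MyClass<>(3).filter(v -> v.doubleValue() > 5) yields Optional.empty.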
In Java, primitive types (e.g. int) and objects (e.g. Object) don't have a common ancestor in the type hierarchy. Because of that, predicates and other stream constructs come in two flavors: there is IntPredicate, which you have to use when working with int, and Predicate, which you have to use when working with objects.
One way to write your filter function would be to use OptionalInt and IntPredicate:
public OptionalInt filter(IntPredicate tester) {
    if (isPresent() && tester.test(get())) {
        return ...
    } else {
        return OptionalInt.empty();
    }
}
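To make that concrete, here is a minimal self-contained sketch of such an int-based container (the class name IntHolder and its fields are my own assumptions, not the asker's actual class):
import java.util.OptionalInt;
import java.util.function.IntPredicate;

// Hypothetical int-holding container, for illustration only.
class IntHolder {
    private final int value;

    IntHolder(int value) { this.value = value; }

    public boolean isPresent() { return true; }   // simplified: a value is always present here
    public int get() { return value; }

    // Returns the held value as an OptionalInt if it passes the test, otherwise empty.
    public OptionalInt filter(IntPredicate tester) {
        if (isPresent() && tester.test(get())) {
            return OptionalInt.of(get());
        } else {
            return OptionalInt.empty();
        }
    }
}
For example, new IntHolder(7).filter(v -> v > 5) yields OptionalInt[7], and new IntHolder(3).filter(v -> v > 5) yields OptionalInt.empty.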
v -> v > 5 can mean different things. It depends on context.
It could be a (Object v) -> v > 5 causing a compilation error since > can't be applied to an Object:
Stream.<Object>of("123", 123).filter(v -> v > 5);
It could be a (Integer v) -> v > 5 meaning that unboxing and autoboxing will be performed in order to do the comparison and to return the result:
Stream.<Integer>of(123, 123).filter(v -> v > 5);
It could be a (int v) -> v > 5 meaning that it's an instance of IntPredicate and things will go smoothly here:
IntStream.of(123, 123).filter(v -> v > 5);
I think the lambda expression is correct and the filter function is slightly wrong, but I'm not able to find my mistake...
You are right.
The issue comes from the generic type declared for the class: your method is defined inside a generic class and relies on its type parameter.
Supposing your class is named Foo, the filter() method relies on the generic type T as its return/parameter type:
public class Foo<T> {
    // ...
    public Optional<T> filter(Predicate<T> tester) {
        // ...
    }
}
It works with inference.
So you get a Predicate of T. But T depends on the generic type defined in the class, and also on how you declared the instance of the Foo class.
And it appears that here T is not a Number.
As an alternative, you could also rely on inference from the declared Foo variable.
If you do that:
Foo<Integer> foo = new Foo<>();
Optional<Integer> optInt = foo.filter(v -> v > 5);
it will compile fine as Integer will be inferred from Foo<Integer>.
So I think that to solve your issue, you should either declare Number or Integer as the upper bound of the generic type:
public class Foo<T extends Integer> {
    // ...
    public Optional<T> filter(Predicate<T> tester) {
        // ...
    }
}
or rely on the inference of the client as in the previous example.
I have an interface MyGen which takes two generic type parameters and has a method getValue():
public interface MyGen<E, T> {
    T getValue();
}
Usually the second generic type would be either Long or Integer.
Then, I wrote a method to combine the objects when the second generic type is Long as below:
public static <E extends MyGen<?, Long>> long combineValue(Set<E> set) {
    return set.stream()
              .map(MyGen::getValue)
              .reduce(0L, (a, b) -> a | b);
}
Now I want to have a similar method for when the second type is an Integer. So I tried updating the same method to:
public static <E extends MyGen<?, ? extends Number>> long combineValue(Set<E> set) {
    return (long) set.stream()
                     .map(MyGen::getValue)
                     .reduce((a, b) -> a | b) // error1
                     .orElse(0); // error2
}
But the following errors are displayed:
error1:
The operator | is undefined for the argument type(s) capture#4-of ? extends java.lang.Number, capture#4-of ? extends java.lang.Number
error2:
The method orElse(capture#4-of ? extends Number) in the type Optional<capture#4-of ? extends Number> is not applicable for the arguments (int)
Is there any way to handle both Long and Integer in the same method, or can it only be done using two separate methods?
You seem to be fine with the return value of this common method being long. In that case, it is possible, because you can get a long from a Number using longValue:
public static long combineValue(Set<? extends MyGen<?, ? extends Number>> set) {
    return set.stream()
              .map(MyGen::getValue)
              .mapToLong(Number::longValue)
              .reduce(0L, (a, b) -> a | b);
}
You basically turn every number into a long first, and then do the reduction.
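For example, here is a usage sketch of mine (LongGen, IntGen and CombineDemo are hypothetical names, and combineValue is assumed to be in scope, e.g. declared in the same class):
import java.util.Arrays;
import java.util.HashSet;

class CombineDemo {
    // Two hypothetical implementations, only to show the call site.
    static class LongGen implements MyGen<String, Long> {
        private final long bits;
        LongGen(long bits) { this.bits = bits; }
        @Override public Long getValue() { return bits; }
    }

    static class IntGen implements MyGen<String, Integer> {
        private final int bits;
        IntGen(int bits) { this.bits = bits; }
        @Override public Integer getValue() { return bits; }
    }

    public static void main(String[] args) {
        // Both element types go through the same combineValue method.
        System.out.println(combineValue(new HashSet<>(Arrays.asList(new LongGen(1L), new LongGen(4L))))); // 5
        System.out.println(combineValue(new HashSet<>(Arrays.asList(new IntGen(2), new IntGen(4)))));     // 6
    }
}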
I have in place what is (as I understand it) a monad, made for a specific domain, which acts as a combination of Either/Try and an action for a REST endpoint call. It's a class with a single generic type parameter for its contained value. Because some functions are only written for their side effect and don't care about the return value, I commonly annotated their return type simply as MyMonadThing<?>. However, this pattern gives me issues when trying to fork logic in some places. I won't show the monad class, but here is an example using lists that illustrates the problem:
public static void main(String[] args) {
    map(Arrays.asList(1, 2, 3), t -> {
        return t == 2 ? foo() : bar();
    });
}

static <T, NewT> List<NewT> map(List<T> lst, Function<T, List<NewT>> fn) {
    return lst.stream().flatMap(t -> fn.apply(t).stream()).collect(Collectors.toList());
}

static List<?> foo() {
    return Arrays.asList("foo");
}

static List<?> bar() {
    return Arrays.asList("bar");
}
This won't compile because of the ternary:
Type mismatch: cannot convert from List<capture#2-of ?> to List<Object>
I can fix this by casting (i.e. return (List<?>) (t == 2 ? foo() : bar());), but that's just odd. Suppose Java did allow this to run, and the result in main were simply of type List<?>: under what circumstances could this result in an actual error due to an unsound type system?
I've encountered a type safety issue while applying Mockito's argument matchers.
Given the following interface:
interface SomeInterface {
    int method(Object x);
}
I'm trying to mock its only method and call it with a parameter whose type differs from the matcher type:
SomeInterface someInterface = mock(SomeInterface.class);
when(someInterface.method(argThat((ArgumentMatcher<Integer>) integer -> integer == 42))).thenReturn(42);
someInterface.method("X"); // Throws ClassCastException
But the method invocation someInterface.method("X") produces the exception, namely:
java.lang.ClassCastException: java.lang.String cannot be cast to java.lang.Integer
However, when I expand the lambda into an anonymous class, everything works fine:
SomeInterface someInterface = mock(SomeInterface.class);
when(someInterface.method(argThat(new ArgumentMatcher<Integer>() {
    @Override
    public boolean matches(Integer integer) {
        return integer == 42;
    }
}))).thenReturn(42);
someInterface.method("X"); // OK, method invokes normally
As I can see from the Mockito sources, the type of the matcher parameter is compared with the type of the actual invocation argument. If the actual argument doesn't subclass the matcher's method parameter type (and thus cannot be assigned to it), the matching is not performed:
private static boolean isCompatible(ArgumentMatcher<?> argumentMatcher, Object argument) {
    if (argument == null) {
        return true;
    } else {
        Class<?> expectedArgumentType = getArgumentType(argumentMatcher);
        return expectedArgumentType.isInstance(argument);
    }
}
However, it seems that this check doesn't pass for lambdas, apparently since it's not possible to retrieve the actual type of the lambda parameter at runtime (it's always just an Object type). Am I right about it?
I use mockito-core 3.0.3.
My Java configuration:
java version "1.8.0_151"
Java(TM) SE Runtime Environment (build 1.8.0_151-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.151-b12, mixed mode)
My first reaction would be “just use intThat(i -> i == 42)”, but apparently the implementation is dropping the information that it has an ArgumentMatcher<Integer> and later on relying on the same reflection approach that doesn't work.
You can't get the lambda parameter type reflectively, and there are existing Q&As explaining why it's not possible and not even intended. Note that this is not even lambda specific; there are scenarios with ordinary classes where the type is not available either.
In the end, there is no point in trying to make this automatism work when it requires additional declarations at the use site, like the explicit type cast to (ArgumentMatcher<Integer>). E.g. instead of
argThat((ArgumentMatcher<Integer>) integer -> integer == 42)
you could also use
argThat(obj -> obj instanceof Integer && (Integer)obj == 42)
or even
argThat(Integer.valueOf(42)::equals)
Though, for trivial examples like these, eq(42) would do as well, or even when(someInterface.method(42)).thenReturn(42).
However, if you often have to match complex integer expressions in contexts with broader argument types, a solution close to your original attempt is to declare a reusable, fixed-type matcher interface
interface IntArgMatcher extends ArgumentMatcher<Integer> {
    @Override boolean matches(Integer arg0);
}
and use
when(someInterface.method(argThat((IntArgMatcher)i -> i == 42))).thenReturn(42);
with i == 42 being a placeholder for a more complex expression.
@Holger's answer is 100% correct, but, struggling with a similar issue myself, I wanted to offer some additional thoughts.
This question demonstrates nicely that lambda expressions are completely different from anonymous classes, and that the former is not simply a shorter way of expressing the latter.
Let's say we define a method that determines the first implemented interface of a Function:
Type firstImplementedInterface(Function<?, ?> function)
{
    return function.getClass().getGenericInterfaces()[0];
}
The following code will print the results for an anonymous class and for a lambda expression:
Function<Integer, Integer> plusOne = new Function<Integer, Integer>()
{
    @Override
    public Integer apply(Integer number)
    {
        return number + 1;
    }
};
Function<Integer, Integer> plusOneLambda = number -> number + 1;

System.err.println(firstImplementedInterface(plusOne));
System.err.println(firstImplementedInterface(plusOneLambda));
Notably, the results are different:
java.util.function.Function<java.lang.Integer, java.lang.Integer>
interface java.util.function.Function
For the anonymous class, the result is a ParameterizedType that preserves both type parameters of the function, whereas the result for the lambda expression is a raw type without type parameters.
Now, let's say we wanted to implement a type-safe dispatch (e.g., as an alternative to casting), similar to what Scala would allow us to do. In other words, given a number of functions, the dispatch method chooses the function whose parameter type matches the dispatch object's type and applies that function:
String getTypeOf(Object object)
{
    return dispatch(object,
        new Function<Number, String>()
        {
            @Override
            public String apply(Number t)
            {
                return "number";
            }
        },
        new Function<Boolean, String>()
        {
            @Override
            public String apply(Boolean b)
            {
                return "boolean";
            }
        });
}
In a very simplified version, such a dispatch method could be implemented as:
@SafeVarargs
public final <T> T dispatch(Object dispatchObject, Function<?, T>... cases)
{
    @SuppressWarnings("unchecked")
    T result = Stream.of(cases)
        .filter(function -> parameterType(function).isAssignableFrom(dispatchObject.getClass()))
        .findFirst()
        .map(function -> ((Function<Object, T>) function).apply(dispatchObject))
        .orElseThrow(NoSuchElementException::new);
    return result;
}

Class<?> parameterType(Function<?, ?> function)
{
    ParameterizedType type = (ParameterizedType) function.getClass().getGenericInterfaces()[0];
    Type parameterType = type.getActualTypeArguments()[0];
    return (Class<?>) parameterType;
}
If we run
System.err.println(getTypeOf(42));
System.err.println(getTypeOf(true));
we get the expected
number
boolean
However, if we were to run
System.err.println(dispatcher.<String>dispatch(Math.PI, (Number n) -> "number", (Boolean b) -> "boolean"));
using lambda expressions instead of anonymous classes, the code will fail with a ClassCastException: java.lang.Class cannot be cast to java.lang.reflect.ParameterizedType, because the lambda expression implements a raw type.
This is one of the things that works with anonymous classes but not with lambda expressions.
Is there an easier way to write the code below, without using toStream()?
import io.vavr.collection.List;
import io.vavr.control.Option;
import lombok.Value;

public class VavrDemo {
    public static void main(String[] args) {
        Foo bar = new Foo(List.of(new Bar(1), new Bar(2)));
        Number value = Option.some(bar)
                .toStream() // <- WTF?!?
                .flatMap(Foo::getBars)
                .map(Bar::getValue)
                .sum();
        System.out.println(value);
    }

    @Value
    static class Foo {
        private List<Bar> bars;
    }

    @Value
    static class Bar {
        private int value;
    }
}
Option is a so-called Monad. This just tells us that the flatMap function follows specific laws, namely
Let
A, B, C be types
unit: A -> Monad<A> a constructor
f: A -> Monad<B>, g: B -> Monad<C> functions
a be an object of type A
m be an object of type Monad<A>
Then all instances of the Monad interface should obey the Functor laws (omitted here) and the three control laws:
Left identity: unit(a).flatMap(f) ≡ f.apply(a)
Right identity: m.flatMap(unit) ≡ m
Associativity: m.flatMap(f).flatMap(g) ≡ m.flatMap(x -> f.apply(x).flatMap(g))
Currently Vavr has (simplified):
interface Option<T> {
    <U> Option<U> flatMap(Function<T, Option<U>> mapper) {
        return isEmpty() ? none() : mapper.apply(get());
    }
}
This version obeys the Monad laws.
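To see those laws in action with Vavr's Option, here is a small sketch of my own (each statement should print true):
import io.vavr.control.Option;
import java.util.function.Function;

public class MonadLawsDemo {
    public static void main(String[] args) {
        // f and g play the roles of f: A -> Monad<B> and g: B -> Monad<C> above.
        Function<Integer, Option<Integer>> f = x -> Option.some(x + 1);
        Function<Integer, Option<Integer>> g = x -> Option.some(x * 2);

        // Left identity: unit(a).flatMap(f) ≡ f.apply(a)
        System.out.println(Option.some(1).flatMap(f).equals(f.apply(1)));
        // Right identity: m.flatMap(unit) ≡ m
        System.out.println(Option.some(1).flatMap(Option::some).equals(Option.some(1)));
        // Associativity: m.flatMap(f).flatMap(g) ≡ m.flatMap(x -> f.apply(x).flatMap(g))
        System.out.println(Option.some(1).flatMap(f).flatMap(g)
                .equals(Option.some(1).flatMap(x -> f.apply(x).flatMap(g))));
    }
}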
It is not possible to define an Option.flatMap the way you want that still obeys the Monad laws. For example, imagine a flatMap version that accepts a function with an Iterable as its result. All Vavr collections have such a flatMap method, but for Option it does not make sense:
interface Option<T> {
    <U> Option<U> flatMap(Function<T, Iterable<U>> mapper) {
        if (isEmpty()) {
            return none();
        } else {
            Iterable<U> iterable = mapper.apply(get());
            if (isEmpty(iterable)) {
                return none();
            } else {
                U resultValue = whatToDoWith(iterable); // ???
                return some(resultValue);
            }
        }
    }
}
You see? The best thing we can do is take just one element of the iterable in case it is not empty. Besides not giving us the result you may have expected (in the VavrDemo above), we can prove that this 'fantasy' version of flatMap breaks the Monad laws.
If you are stuck in such a situation, consider changing your calls slightly. For example, the VavrDemo can be expressed like this:
Number value = Option.some(bar)
        .map(b -> b.getBars().map(Bar::getValue).sum())
        .getOrElse(0);
I hope this helps and the Monad section above does not completely scare you away. In fact, developers do not need to know anything about Monads in order to take advantage of Vavr.
Disclaimer: I'm the creator of Vavr (formerly: Javaslang)
How about using .fold() or .getOrElse()?
Option.some(bar)
        .fold(List::<Bar>empty, Foo::getBars)
        .map(Bar::getValue)
        .sum();

Option.some(bar)
        .map(Foo::getBars)
        .getOrElse(List::empty)
        .map(Bar::getValue)
        .sum();
I'm a little confused about how generics work. I'm learning about the function API in Java, and while testing the Function interface I got confused about how generics are used in the compose method.
Reading about generics on the official Java tutorial website, I understood that if a method uses a generic type in its return type or parameters, that type has to be declared in the method's signature, as explained below.
Here is the method I read in the official docs tutorial:
public static <K, V> boolean compare(Pair<K, V> p1, Pair<K, V> p2) {
    return p1.getKey().equals(p2.getKey()) &&
           p1.getValue().equals(p2.getValue());
}
The above method has two type parameters, K and V, which are declared in the signature after the static keyword. But when I read the Java Function API, there is a method called compose, and its signature is:
default <V> Function<V, R> compose(Function<? super V, ? extends T> before) {
    Objects.requireNonNull(before);
    return (V v) -> apply(before.apply(v));
}
1) The first question: where are T and R declared? They are used in the return type and in the parameter. Or is my understanding wrong?
Then I read more generics tutorials, tried to understand the concept of super and extends in generics (and read here), tested the compose method more, and got confused again about how super and extends work in the compose method.
public static void main(String... args) {
    Function<Integer, String> one = (i) -> i.toString();
    Function<String, Integer> two = (i) -> Integer.parseInt(i);
    one.compose(two);
}
As above, I have declared two Functions with lambdas. One takes an Integer input and produces a String output; the other one is the reverse.
2) The second question: how are Integer and String related to extends and super? There is no relationship between the String and Integer classes; neither extends the other, so how is this working?
I tried my best to explain my question/problem. Let me know what you didn't understand and I will try again.
Where are T and R defined?
Remember, compose is declared in the Function interface. It can not only use generic parameters of its own, but also the type's generic parameters. R and T are declared in the interface declaration:
interface Function<T, R> {
    ...
}
What are ? extends and ? super?
? is a wildcard. It means that the generic parameter can be anything. extends and super put constraints on the wildcard. ? super V means that whatever ? is, it must be a superclass of V, or V itself. ? extends T means that whatever ? is, it must be a subclass of T, or T itself.
Now let's look at this:
Function<Integer, String> one = (i) -> i.toString();
Function<String, Integer> two = (i) -> Integer.parseInt(i);
one.compose(two);
From this, we can deduce that T is Integer and R is String. What is V? V must be some type such that the constraint Function<? super V, ? extends T> is satisfied.
We can do this by substituting the argument we passed in - Function<String, Integer> - to get String super V and Integer extends Integer.
The second constraint is already satisfied, while the first constraint now says that String must be a superclass of V, or V must be String itself. String cannot have subclasses, so V must be String.
Hence, you can write something like:
Function<String, String> f = one.compose(two);
but not
Function<Integer, String> f = one.compose(two);
When you compose a Function<Integer, String> and a Function<String, Integer> you cannot possibly get a Function<Integer, String>. If you try to do this, V is automatically inferred to be Integer. But String super Integer is not satisfied, so the compilation fails. See the use of the constraints now? It is to avoid programmers writing things that don't make sense. Another use of the constraints is to allow you to do something like this:
Function<A, B> one = ...
Function<C, SubclassOfA> two = ...
Function<SubclassOfC, B> f = one.compose(two);
There is no relationship between Integer and String in this case, it's all about V.
1) The compose function is part of the interface Function<T,R>. As you can see in the documentation for this interface:
Type Parameters:
T - the type of the input to the function
R - the type of the result of the function
2) The super and extends constraints in question aren't applied to T and R; they're applied to the generic type parameters of the function that you pass in as an argument to the compose function.
Basically this means that if you have:
Function<ClassA, ClassB> one;
Function<SomeSuperClassOfC, SomeSubclassOfA> two;
then it's valid to call
Function<ClassC, ClassB> three = one.compose(two);
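To make this concrete with JDK types (a sketch of mine: Number plays ClassA, String plays ClassB, Object is the "SomeSuperClassOfC" with C = CharSequence, and Integer is the "SomeSubclassOfA"):
import java.util.function.Function;

public class ComposeDemo {
    public static void main(String[] args) {
        Function<Number, String> one = n -> "value = " + n;   // Function<ClassA, ClassB>
        Function<Object, Integer> two = Object::hashCode;     // Function<SomeSuperClassOfC, SomeSubclassOfA>

        // V is inferred as CharSequence: Object super CharSequence and Integer extends Number both hold.
        Function<CharSequence, String> three = one.compose(two);
        System.out.println(three.apply("hello"));              // prints "value = " followed by "hello".hashCode()
    }
}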
I will try to explain from scratch.
interface Function<T, R> is an interface with one abstract method that must be implemented: R apply(T t).
In Java prior to 8 we had to write:
Function<Integer, String> one = new Function<Integer, String>() {
    @Override
    public String apply(Integer i) {
        return i.toString();
    }
};
Now you can use it:
String resultApply = one.apply(5);
Now, I think, you get the idea.
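For completeness, here is the same Function written as a Java 8 lambda, together with the compose call from the question (a small sketch of mine; the class name is arbitrary):
import java.util.function.Function;

public class ComposeFromZero {
    public static void main(String[] args) {
        // The same Function as above, written as a lambda instead of an anonymous class.
        Function<Integer, String> one = i -> i.toString();
        Function<String, Integer> two = Integer::parseInt;

        // one.compose(two) runs 'two' first and 'one' second: String -> Integer -> String.
        Function<String, String> composed = one.compose(two);
        System.out.println(composed.apply("5"));  // prints "5"
        System.out.println(one.apply(5));         // prints "5" as well
    }
}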