Because of inheritance, a leaf class has to have a Function<T, R>, which it uses as a Supplier<R> (ignoring the passed parameter).
What is the best way to convey that the generic parameter type is not used and that null should get passed?
To convey the notion that some variable, return type, parameter, etc. can only be null, the type Void can be used; there is only one valid value of this type: null.
So in your case you could use Function<Void, R> to indicate that apply(null) will always be called and that the function therefore needs to be mapped to a supplier of sorts.
In general you'd want to use Supplier<R> instead, but in cases where a Function is required this could be an option.
The same would be true for consumers that need to be represented as functions: use Function<T, Void> in those cases and return null.
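A short sketch of both directions (assuming java.util.function.Function is imported; the concrete values are just examples):
// Function<Void, R> used like a Supplier<R>: the only possible argument is null.
Function<Void, String> supplierLike = ignored -> "value";
String value = supplierLike.apply(null);

// Function<T, Void> used like a Consumer<T>: it must return null.
Function<String, Void> consumerLike = s -> {
    System.out.println(s);
    return null;
};
consumerLike.apply("hello");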
Use java.lang.Void
The Void class is an uninstantiable placeholder class to hold a reference to the Class object representing the Java keyword void.
It can't be instantiated, so only null can be passed as parameter.
class SomeClass implements Function<Void, YourResultClass> {
    // implement apply(Void) here; the only possible argument is null
}
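Once apply is implemented, the only way to call it is with null (YourResultClass is the placeholder from above):
YourResultClass result = new SomeClass().apply(null);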
Related
I have a class with a type parameter.
class MyObject<IdType> {
    @Setter
    @Getter
    private IdType id;
}
And I thought I could add some methods for convenience, so I did.
<T extends MyObject<? super IdType>> void copyIdTo(T object) {
    object.setId(getId());
}

<T extends MyObject<? extends IdType>> void copyIdFrom(T object) {
    object.copyIdTo(this);
}
And I just realized that I can do this.
void copyIdTo(MyObject<? super IdType> object) {
    object.setId(getId());
}

void copyIdFrom(MyObject<? extends IdType> object) {
    object.copyIdTo(this);
}
Are those two sets of methods equivalent? Which way (or style) is preferred?
In your case, the two approaches are effectively equivalent. They both restrict the argument's type to MyObject<...> or a subtype.
Since your example methods return void there's no real benefit from making the method generic. The only important thing for your method is that the argument is a MyObject<...>—beyond that the real type is meaningless. Adding the ability to make the argument's type more specific adds nothing for the method's implementation and does nothing for the caller. In other words, it's irrelevant "fluff".
So for your examples, I would say prefer the non-generic option. It's cleaner and more straightforward.
However, if your methods returned the given argument back to the caller then making the method generic could prove useful; it would allow you to declare the return type as T. This opens up possibilities to the caller such as method chaining or invoking the method "inside" another method call, all based on the specific type passed as an argument. An example of this in the core library would be Objects.requireNonNull(T).
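For example, a sketch of a variant of copyIdTo that returns its argument (MySpecialObject and source are hypothetical, just to show the idea):
<T extends MyObject<? super IdType>> T copyIdTo(T object) {
    object.setId(getId());
    return object; // the caller gets the argument back with its specific type T
}

// which allows inline use such as:
// MySpecialObject target = source.copyIdTo(new MySpecialObject());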
Another good case for making the method generic is mentioned by @Thilo in the comments:
Another case would be if your method takes multiple arguments. Then you can introduce a T to make sure those two arguments have the same type (instead of two distinct types that happen to [fulfill] the constraints individually).
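A sketch of that case (copyIdBetween is a made-up name): the shared T guarantees both arguments use the same id type, which two independent wildcards could not express.
static <T> void copyIdBetween(MyObject<T> from, MyObject<T> to) {
    to.setId(from.getId());
}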
Yes, they are equivalent. Both sets of methods declare the same thing: that the method parameter must be of type MyObject<...> or a compatible subtype (subclass).
The only reason to declare T in this way is if you need to refer to T elsewhere, such as the return type of the method, or if you have multiple parameters of the same type, or inside the method body.
I would always prefer the shorter, simpler, clearer version with fewer angle brackets to hurt the eyeballs :)
This is a question that I have wondered about since lambdas were introduced in Java, and inspired by a related question, I thought that I might bring it up here to see whether there are any ideas.
(Side notes: There is a similar question for C#, but I did not find one for Java. The questions for Java about "storing a lambda in a variable" always referred to cases where the type of the variable was fixed - this is exactly what I'm trying to circumvent)
Lambda expressions receive the type that they need, via target type inference. This is all handled by the compiler. For example, the functions
static void useF(Function<Integer, Boolean> f) { ... }
static void useP(Predicate<Integer> p) { ... }
can both be called with the same lambda expression:
useF(x -> true);
useP(x -> true);
The expression will manifest itself once as a class implementing the Function<Integer, Boolean> interface, and once as a class implementing the Predicate<Integer> interface.
But unfortunately, there is no way of storing the lambda expression with a type that is applicable to both functions, like in
GenericLambdaType lambda = x -> true;
This "generic lambda type" would have to encode the type of the method that can be implemented by the given lambda expression. So in this case, it would be
(Ljava.lang.Integer)Ljava.lang.Boolean lambda = x -> true;
(based on the standard type signatures, for illustration). (This is not completely unreasonable: The C++ lambda expressions are basically doing exactly that...)
So is there any way to prevent a lambda expression being resolved to one particular type?
Particularly, is there any trick or workaround that allows the useF and useP methods sketched above to be called with the same object, as in
useF(theObject);
useP(theObject);
This is unlikely, so I assume the answer will plainly be: "No", but: Could there be any way to write a generic, magic adaption method like
useF(convertToRequiredTargetType(theObject));
useP(convertToRequiredTargetType(theObject));
?
Note that this question is more out of curiosity. So I'm literally looking for any way to achieve this (except for custom precompilers or bytecode manipulation).
There seem to be no simple workarounds. A naive attempt to defer the type inference, by wrapping the expression into a generic helper method, as in
static <T> T provide()
{
    return x -> true;
}
of course fails, stating that "The target type of this expression must be a functional interface" (the type can simply not be inferred here). But I also considered other options, like MethodHandles, brutal unchecked casts or nasty reflection hacks. Everything seems to be lost immediately after the compilation, where the lambda is hidden in an anonymous object of an anonymous class, whose only method is called via InvokeVirtual...
I don't see any way of allowing a lambda expression that resolves to one particular functional interface type to be interpreted directly as an equivalent functional interface type. There is no superinterface or "generic lambda type" that both functional interfaces extend or could extend, i.e. that enforces that it takes exactly one parameter of exactly one specific type and returns a specific type.
But you can write a utility class with methods to convert from one type of functional interface to another.
This utility class converts predicates to functions that return booleans and vice versa. It includes identity conversions so that you don't have to worry about whether to call the conversion method.
import java.util.function.Function;
import java.util.function.Predicate;

public class Convert
{
    static <T> Predicate<T> toPred(Function<? super T, Boolean> func)
    {
        return func::apply;
    }

    static <T> Predicate<T> toPred(Predicate<? super T> pred)
    {
        return pred::test;
    }

    static <T> Function<T, Boolean> toFunc(Predicate<? super T> pred)
    {
        return pred::test;
    }

    static <T> Function<T, Boolean> toFunc(Function<? super T, Boolean> func)
    {
        return func::apply;
    }
}
The input functions and predicates are consumers of T, so PECS dictates ? super. You could also add other overloads that could take BooleanSuppliers, Supplier<Boolean>, or any other functional interface type that returns a boolean or Boolean.
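For example, one such additional overload might look like this (a sketch, not part of the class above; it assumes java.util.function.Supplier is also imported):
    // Adapt a Supplier<Boolean> to a Predicate<T> by ignoring the tested value.
    static <T> Predicate<T> toPred(Supplier<Boolean> supplier)
    {
        return t -> supplier.get();
    }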
This test code compiles. It allows you to pass a variable of a functional interface type and convert it to the desired functional interface type. If you already have the exact functional interface type, you don't have to call the conversion method, but you can if you want.
import java.util.function.Function;
import java.util.function.Predicate;

public class Main
{
    public static void main(String[] args)
    {
        Function<Integer, Boolean> func = x -> true;
        useF(func);
        useF(Convert.toFunc(func));
        useP(Convert.toPred(func));

        Predicate<Integer> pred = x -> true;
        useP(pred);
        useP(Convert.toPred(pred));
        useF(Convert.toFunc(pred));
    }

    static void useF(Function<Integer, Boolean> f) {
        System.out.println("function: " + f.apply(1));
    }

    static void useP(Predicate<Integer> p) {
        System.out.println("predicate: " + p.test(1));
    }
}
The output is:
function: true
function: true
predicate: true
predicate: true
predicate: true
function: true
Suppose in a shiny future (say Java 10 or 11) we have true function types, which allow us to specify a function without forcing it to be of a particular conventional Java type (and which would be some kind of value rather than an object, etc.). Then we still have the issue that the existing methods
static void useF(Function<Integer, Boolean> f) { ... }
static void useP(Predicate<Integer> p) { ... }
expect a Java object implementing a conventional Java interface and behaving like Java objects do, i.e. not suddenly changing the result of theObject instanceof Function or theObject instanceof Predicate. This implies that it will not be the generic function that suddenly starts implementing the required interface when being passed to either of these methods but rather that some kind of capture conversion applies, producing an object implementing the required target interface, much like today, when you pass a lambda expression to either of these methods or when you convert a Predicate to a Function using p::test (or vice versa using f::apply).
So what won’t happen is that you are passing the same object to both methods. You only have an implicit conversion, which will be determined at compile time and likely made explicit in byte code, just as with today’s lambda expressions.
A generic method like convertToRequiredTargetType can’t work because it has no knowledge about the target type. The only solutions to make such a thing work are the ones you have precluded: precompilers and byte code manipulation. You could create a method accepting an additional parameter, a Class object describing the required interface, which delegates to the LambdaMetafactory, but that method would have to redo everything the compiler does: determining the functional signature, the name of the method to implement, etc.
For no benefit, as invoking that utility method like convertToRequiredTargetType(theObject) (or actually convertToRequiredTargetType(theObject, Function.class)) is in no way simpler than, e.g., theObject::test. Your desire to create such a method caused your weird statement “Everything seems to be lost immediately after the compilation, where the lambda is hidden in an anonymous object of an anonymous class” when actually, you have an object implementing a functional interface with a known signature, which can therefore be converted as simply as function::methodName (where the IDE can complete the method name for you, if you have forgotten it)…
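For illustration, reusing the useF/useP methods from the question, that adaption is just a method reference at the call site:
Predicate<Integer> theObject = x -> true;
useP(theObject);          // already has the required type
useF(theObject::test);    // adapted to Function<Integer, Boolean> via a method reference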
I'd like to know if something like this is possible in Java (mix of C++ and Java ahead)
template<typename T> bool compare(Wrapper wrapper) {
    if (wrapper.obj.getClass().equals(T.class))
        return true;
    return false;
}
To clarify, the function takes in an object which contains a java.lang.Object, but I'd like to be able to pass that wrapper into this generic comparison function to check whether that object is of a particular type, i.e.
if(compare<String>(myWrapper))
// do x
No, it's not possible due to erasure. Basically, the compare method has no idea what T is. There's only one compare method (as opposed to C++, where there's one per T), and it isn't given any information about how it was invoked (ie, what the caller considered its T to be).
The typical solution is to have the class (or method) accept a Class<T> cls, and then use cls.isInstance:
public <T> boolean compare(Wrapper wrapper, Class<T> cls) {
    return cls.isInstance(wrapper.obj);
}

// and then, at the call site:
if (compare(wrapper, Foo.class)) {
    ...
}
Of course, this means that the call site needs to have the Class<T> object. If that call site is itself a generic method, it needs to get that reference from its caller, and so on. At some point, somebody needs to know what the specific type is, and that somebody passes in Foo.class.
You cannot reference static members of a type parameter (such as you try to do in the form of T.class). You also cannot use them meaningfully in instanceof expressions. More generally, because Java generics are implemented via type erasure, you cannot use type parameters in any way at run time -- all type analysis is performed statically, at compile time.
Depending on exactly what you're after, there are at least two alternative approaches.
The first, and more usual, is to ensure that the necessary types can be checked statically. For example, you might parameterize your Wrapper class with the type of the object it wraps. Then, supposing that you use it in a program that is type-safe, wherever you have a Wrapper<String> you know that the wrapped object is a String.
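A minimal sketch of that approach (the shape of Wrapper is assumed, since the original class isn't shown):
class Wrapper<T> {
    final T obj;
    Wrapper(T obj) { this.obj = obj; }
}

// Wherever a Wrapper<String> appears, the wrapped object is statically known to be a String:
Wrapper<String> w = new Wrapper<>("hello");
String s = w.obj; // no cast, no runtime check needed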
That doesn't work so well if you want to verify the specific class of the wrapped object, however, when the class to test against is not final. In that case, you can pass a Class object, something like this:
<T> boolean compare(Wrapper<? super T> wrapper, Class<T> clazz) {
    return wrapper.obj.getClass().equals(clazz);
}
That checks the class of the wrapped object against the specified class, allowing the method to be invoked only in cases where static analysis allows that it could return true.
You can actually combine those two approaches, if you like, to create a Wrapper class whose instances can hold only members of a specific class, as opposed to any object that is assignable to a given type. I'm not sure why you would want to do that, though.
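A sketch of that combination (the names are made up): the wrapper stores the Class token and rejects anything whose runtime class is not exactly that class.
class ExactWrapper<T> {
    final Class<T> type;
    final T obj;

    ExactWrapper(Class<T> type, T obj) {
        if (obj.getClass() != type) {
            throw new IllegalArgumentException("not exactly a " + type.getName());
        }
        this.type = type;
        this.obj = obj;
    }
}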
Let's say I have a method which takes a java.util.function.Predicate and returns a CompletableFuture:
public <R> CompletableFuture<R> call(Predicate<R> request) {
    return new CompletableFuture<>();
}
If I call this method using an anonymous class like this:
Integer a = call(new Predicate<Integer>() {
    @Override
    public boolean test(Integer o) {
        return false;
    }
}).join();
It works because the type of the Predicate is declared explicitly. However, if I use a lambda expression instead of the anonymous class, like this:
Integer a = call(o -> false).join();
It doesn't compile because Java thinks that it's a Predicate<Object>, and the compiler reports an error like this:
Error:(126, 42) java: incompatible types: java.lang.Object cannot be converted to java.lang.Integer
There are a few workarounds that I found: I can create the CompletableFuture variable explicitly instead of chaining, add an unnecessary extra Class<R> argument that tells Java which type I want, or force the type in the lambda expression.
However, I wonder why Java chooses Object instead of Integer in the lambda example. It already knows which type I want, so the compiler could use the most specific type instead of Object. All three workarounds seem ugly to me.
There are limits to Java 8's type inference: when the generic method call is chained like this, the compiler does not look at the type of the variable that the final result is assigned to. Instead, it infers the type Object for the type parameter.
You can fix it by explicitly specifying the type of the argument o in the lambda:
Integer a = call((Integer o) -> false).join();
Other people have answered how to force the lambda to the right type. However, I would argue that Predicate<Object> is not "wrong" for that predicate and you should be allowed to use it -- you have a predicate on all objects -- it should work on all objects including Integers; why do you need to make a more restricted predicate on Integers? You shouldn't.
I would argue that the real problem is the signature of your call method. It's too strict. Predicate is a consumer of its type parameter (it only has a method that takes its type parameter type and returns boolean); therefore, according to the PECS rule (Producer extends, consumer super), you should always use it with a super bounded wildcard.
So you should declare your method like this:
public <R> CompletableFuture<R> call(Predicate<? super R> request)
No matter what code you had in this method, it will still compile after changing the bound to super, because Predicate is a consumer. And now, you don't need to force it to be Predicate<Integer> -- you can use a Predicate<Object> and still have it return a CompletableFuture<Integer>.
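A minimal sketch of what that enables (assuming call is an instance method, as in the question):
Predicate<Object> anyObject = o -> false;

// Predicate<Object> satisfies Predicate<? super Integer>, and R is inferred
// from the assignment target:
CompletableFuture<Integer> future = call(anyObject);
Integer a = future.join();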
The Generic Methods tutorial has this helpful example:
public <T extends E> boolean addAll(Collection<T> c);
However, [...] the type parameter T is used only once. The return type
doesn't depend on the type parameter, nor does any other argument to
the method (in this case, there simply is only one argument). [...]
If that is the case, one should use wildcards.
The codebase of the project I am working on has a few methods like this:
public <T extends Something> T getThing();
and (not in the same interface)
public <D> void storeData(int id, D data);
Is there any point in having the method type parameter instead of using the bound (Something above, Object below) directly?
(Note that in the former case, all of the few implementations are annotated with @SuppressWarnings("unchecked") and the point could be to hide this warning from the user of the method, but I am not sure this is a laudable achievement.
In the latter case, some implementations use reflection to store instances of different classes differently, but I do not see how this is facilitated by the type parameter.)
There are five different cases of a type parameter appearing only once to consider.
1) Once in return type position:
1.a) Return type is the type variable
public <T extends Something> T getThing();
This should be a red flag: The caller can arbitrarily choose an expected return type and the callee has no way of knowing the chosen type. In other words the implementation can't guarantee the returned value will be of the specified return type unless it (a) never returns, (b) always throws an exception or (c) always returns null. In all of these cases the return type happens to be irrelevant altogether.
(Personally I don't mind methods like these if the code is very "dynamic", i.e. you're running the risk of a class cast exception anyway and the method boundary is still early enough for the exception to be raised. A good example is deserialization. All parties calling the method have to know and understand this, though.)
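A sketch of what implementations of such a method typically look like (lookupSomething is a made-up helper); this is also where the @SuppressWarnings("unchecked") mentioned in the question comes from:
@SuppressWarnings("unchecked")
public <T extends Something> T getThing() {
    // Unchecked cast: if the caller's chosen T doesn't match the actual object,
    // the ClassCastException surfaces at the call site, not here.
    return (T) lookupSomething();
}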
1.b) Type variable is contained in return type, but not the return type itself
Very common and valid. Examples are the various factory methods in guava, like Lists.newArrayList().
2) Once in parameter type position:
2.a) Simple type parameter
public static <E> void shuffle(List<E> list);
Note that the implementation actually needs the type parameter in order to shuffle the elements. Nonetheless, the caller should not have to be bothered with it. You can write an internal helper method that "captures" the wildcard:
public static void shuffle(List<?> list) {
    shuffleWithCapture(list);
}

private static <E> void shuffleWithCapture(List<E> list) {
    // implementation
}
2.b) Type parameter with multiple bounds
public static <T extends Foo & Bar> void foo(T);
public static <E extends Foo & Bar> void foo(List<E>);
Since Java does not allow intersection types anywhere but in type parameter bounds, this is the only way to express these signatures.
2.c) Type parameter bound contains the type parameter variable
public static <T extends Comparable<? super T>> void sort(List<T> list);
To express that the elements of the list must be comparable with each other, one needs a name for their type. There is no way to eliminate such type parameters.
The first one -
public <T extends Something> T getThing();
will cast the return value to the assigned type, which is generally not safe (the compiler warns you about it) and so may throw a ClassCastException. But since it doesn't take any parameter, I assume it will always return Something, and the generic type there is useless.
The second one is, I think, also useless, as it will allow any type, so it is better to use Object instead.
Such declarations make the code less readable and don't provide any benefit either, so I'd recommend using:
public Something getThing();
and
public static void storeData(int id, Object data);