In this question, user @Holger provided an answer that shows an uncommon usage of anonymous classes, which I wasn't aware of.
That answer uses streams, but this question is not about streams, since this anonymous type construction can be used in other contexts, e.g.:
String s = "Digging into Java's intricacies";
Optional.of(new Object() { String field = s; })
    .map(anonymous -> anonymous.field) // anonymous implied type
    .ifPresent(System.out::println);
To my surprise, this compiles and prints the expected output.
Note: I'm well aware that, since ancient times, it is possible to construct an anonymous inner class and use its members as follows:
int result = new Object() { int incr(int i) {return i + 1; } }.incr(3);
System.out.println(result); // 4
However, this is not what I'm asking here. My case is different, because the anonymous type is propagated through the Optional method chain.
Now, I can imagine a very useful usage for this feature... Many times, I've needed to issue some map operation over a Stream pipeline while also preserving the original element, i.e. suppose I have a list of people:
public class Person {
    Long id;
    String name, lastName;
    // getters, setters, hashCode, equals...
}
List<Person> people = ...;
And that I need to store a JSON representation of my Person instances in some repository, for which I need the JSON string for every Person instance, as well as each Person id:
public static String toJson(Object obj) {
    String json = ...; // serialize obj with some JSON lib
    return json;
}
people.stream()
    .map(person -> toJson(person))
    .forEach(json -> repository.add(ID, json)); // where's the ID?
In this example, I have lost the Person.id field, since I've transformed every person to its corresponding json string.
To circumvent this, I've seen many people use some sort of Holder class, or Pair, or even Tuple, or just AbstractMap.SimpleEntry:
people.stream()
    .map(p -> new Pair<Long, String>(p.getId(), toJson(p)))
    .forEach(pair -> repository.add(pair.getLeft(), pair.getRight()));
While this is good enough for this simple example, it still requires the existence of a generic Pair class. And if we need to propagate 3 values through the stream, I think we could use a Tuple3 class, etc. Using an array is also an option, however it's not type safe, unless all the values are of the same type.
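For reference, a minimal Pair along the lines assumed by the snippet above might look like this (a hypothetical helper written for illustration, not part of the JDK):
public final class Pair<L, R> {
    private final L left;
    private final R right;

    public Pair(L left, R right) {
        this.left = left;
        this.right = right;
    }

    public L getLeft()  { return left; }
    public R getRight() { return right; }
}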
So, using an implied anonymous type, the same code above could be rewritten as follows:
people.stream()
    .map(p -> new Object() { Long id = p.getId(); String json = toJson(p); })
    .forEach(it -> repository.add(it.id, it.json));
It is magic! Now we can have as many fields as desired, while also preserving type safety.
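For example, propagating a third value is just another field declaration. This is only a sketch: the three-argument repository.add overload used here is hypothetical and not part of the original example.
people.stream()
    .map(p -> new Object() {
        Long id = p.getId();
        String name = p.getName();
        String json = toJson(p);
    })
    .forEach(it -> repository.add(it.id, it.name, it.json)); // hypothetical 3-arg overload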
While testing this, I wasn't able to use the implied type in separate lines of code. If I modify my original code as follows:
String s = "Digging into Java's intricacies";
Optional<Object> optional = Optional.of(new Object() { String field = s; });
optional.map(anonymous -> anonymous.field)
    .ifPresent(System.out::println);
I get a compilation error:
Error: java: cannot find symbol
  symbol:   variable field
  location: variable anonymous of type java.lang.Object
And this is to be expected, because there's no member named field in the Object class.
So I would like to know:
Is this documented somewhere or is there something about this in the JLS?
What limitations does this have, if any?
Is it actually safe to write code like this?
Is there a shorthand syntax for this, or is this the best we can do?
This kind of usage has not been mentioned in the JLS, but, of course, the specification doesn’t work by enumerating all the possibilities the programming language offers. Instead, you have to apply the formal rules regarding types, and they make no exception for anonymous types; in other words, the specification doesn’t say at any point that the type of an expression has to fall back to the named supertype in the case of anonymous classes.
Granted, I could have overlooked such a statement in the depths of the specification, but to me, it always looked natural that the only restriction regarding anonymous types stems from their anonymous nature, i.e. every language construct that requires referring to the type by name can’t work with the type directly, so you have to pick a supertype.
So if the type of the expression new Object() { String field; } is the anonymous type containing the field “field”, then not only does the access new Object() { String field; }.field work, but so does Collections.singletonList(new Object() { String field; }).get(0).field, unless an explicit rule forbids it; and, consistently, the same applies to lambda expressions.
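A small self-contained check of that claim, using nothing beyond the standard library (it compiles on Java 8 and later, following the reasoning above):
// The element type of the list is the anonymous type, so the field stays accessible after get(0)
String value = java.util.Collections
    .singletonList(new Object() { String field = "it works"; })
    .get(0).field;
System.out.println(value); // prints "it works"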
Starting with Java 10, you can use var to declare local variables whose type is inferred from the initializer. That way, you can now declare arbitrary local variables, not only lambda parameters, having the type of an anonymous class. E.g., the following works
var obj = new Object() { int i = 42; String s = "blah"; };
obj.i += 10;
System.out.println(obj.s);
Likewise, we can make the example of your question work:
var optional = Optional.of(new Object() { String field = s; });
optional.map(anonymous -> anonymous.field).ifPresent(System.out::println);
In this case, we can refer to the specification showing a similar example indicating that this is not an oversight but intended behavior:
var d = new Object() {}; // d has the type of the anonymous class
and another one hinting at the general possibility that a variable may have a non-denotable type:
var e = (CharSequence & Comparable<String>) "x";
// e has type CharSequence & Comparable<String>
That said, I have to warn about overusing the feature. Besides the readability concerns (you called it yourself an “uncommon usage”), each place where you use it, you are creating a distinct new class (compare to the “double brace initialization”). It’s not like an actual tuple type or unnamed type of other programming languages that would treat all occurrences of the same set of members equally.
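A quick sketch of that distinction: two structurally identical anonymous declarations still produce unrelated classes.
var a = new Object() { int x = 1; };
var b = new Object() { int x = 1; };
// a = b; // does not compile: each creation expression declares its own class
System.out.println(a.getClass() == b.getClass()); // false, two distinct classes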
Also, instances created like new Object() { String field = s; } consume twice as much memory as needed, as they will not only contain the declared fields, but also the captured values used to initialize the fields. In the new Object() { Long id = p.getId(); String json = toJson(p); } example, you pay for the storage of three references instead of two, as p has been captured. In a non-static context, anonymous inner classes also always capture the surrounding this.
Absolutely not an answer, but more of my $0.02.
That is possible because the lambda gives you a variable whose type is inferred by the compiler from the context. That is why it only works for types that are inferred, not for types we declare ourselves.
The compiler can deduce that the type is the anonymous one; it just can't express it in a way that would let us use it by name. So the information is there, but due to language restrictions we can't get at it.
It's like saying:
Stream<TypeICanUseButTypeICantName> // Stream<YouKnowWho>?
It does not work in your last example because you have explicitly told the compiler the type to be Optional<Object>, thus discarding the anonymous type that would otherwise have been inferred.
These anonymous types are now (as of Java 10) available in a much simpler way too:
var x = new Object() {
    int y;
    int z;
};
int test = x.y;
Since the type of var x is inferred by the compiler, int test = x.y; works as well.
Is this documented somewhere or is there something about this in the JLS?
I think this is not a special case for anonymous classes that needs to be introduced into the JLS. As you mentioned in your question, you can already access anonymous class members directly, e.g.: incr(3).
First, let's look at a local class example instead; it illustrates why a chain on an anonymous class instance can access the class's members. For example:
@Test
void localClass() throws Throwable {
    class Foo {
        private String foo = "bar";
    }
    Foo it = new Foo();
    assertThat(it.foo, equalTo("bar"));
}
As we can see, a local class's members can be accessed outside the class body even if they are private.
As @Holger mentioned above in his answer, the compiler creates an inner class named like EnclosingClass${digit} for each anonymous class, so new Object() {...} has its own type, which is derived from Object. Because the chained method returns that own type EnclosingClass${digit} rather than the Object type it is derived from, chaining on the anonymous class instance works fine.
@Test
void chainingAnonymousClassInstance() throws Throwable {
    String foo = chain(new Object() { String foo = "bar"; }).foo;
    assertThat(foo, equalTo("bar"));
}

private <T> T chain(T instance) {
    return instance;
}
Since we can't reference the anonymous class directly, when we break the chained calls into two lines we end up referencing the supertype Object it is derived from.
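A sketch of that broken variant, reusing the chain helper from the test above:
// Assigning to the named supertype first loses the anonymous type:
Object it = chain(new Object() { String foo = "bar"; });
// String foo = it.foo; // does not compile: Object has no member named foo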
The rest of the question has already been answered by @Holger.
Edit
“Can we conclude that this construction is possible as long as the anonymous type is represented by a generic type variable?”
I'm sorry, I can't find the exact JLS reference, but I can tell you that it does. You can use the javap command to see the details. For example:
public class Main {
    void test() {
        int count = chain(new Object() { int count = 1; }).count;
    }

    <T> T chain(T it) {
        return it;
    }
}
and you can see that a checkcast instruction is emitted:
void test();
descriptor: ()V
0: aload_0
1: new #2 // class Main$1
4: dup
5: aload_0
6: invokespecial #3 // Method Main$1."<init>":(LMain;)V
9: invokevirtual #4 // Method chain:(Ljava/lang/Object;)Ljava/lang/Object;
12: checkcast #2 // class Main$1
15: getfield #5 // Field Main$1.count:I
18: istore_1
19: return
I have the following class which builds:
public class Test<T> {
    public T DoSomething(T value) {
        return value;
    }
}
I can also define the class like this (notice the extra <T> in the DoSomething signature), which also builds:
public class Test<T> {
    public <T> T DoSomething(T value) {
        return value;
    }
}
What is its purpose, and when do I need to include it? I am asking about the additional <T> before the return type, not about what generics are.
Maybe this will clear it up. The notation <T> declares a type variable.
So we have one variable T at the class level, and a redeclaration of that same symbol for a particular method.
class Test<T> {
    <T> T doSomething(T value) {
        // <T> declares a new type variable for this one method
        System.out.println("Type of value: " + value.getClass().getSimpleName());
        return value;
    }

    T doSomethingElse(T value) {
        // T is not redeclared here, thus is the type from the class declaration
        System.out.println("Type of value: " + value.getClass().getSimpleName());
        return value;
    }

    public static void main(String... a) {
        Test<String> t = new Test<>();
        t.doSomething(42);
        t.doSomething("foo"); // also works
        t.doSomething(t); // contrived, but still works
        t.doSomethingElse("hi");
        t.doSomethingElse(42); // errors because the type `T` is bound to `String` by the declaration `Test<String> t`
    }
}
In main, I create a Test<String> so the class-level T is String. This applies to my method doSomethingElse.
But for doSomething, T is redeclared. If I call the method with an Integer arg, then T for that case is Integer.
Really, it would have been better to call the second type variable anything else at all on the declaration of doSomething; U, for example.
(In most cases, I actually favour giving useful names to type variables, not just single letters).
The concept is known as a generic method (docs.oracle.com).
In the code presented, we have an especially tricky case of generics since we have two generic parameters with the same name:
the <T> on the class-level: public class Test<T>, and
the <T> on the method-level: public <T> T DoSomething(T value)
The latter hides the former within the scope of the method DoSomething(...), just like a local variable would hide an instance field with the same name. In general, I would advise against this type of "hiding" since it makes the code harder to read and understand. Thus, for the rest of the discussion we will work with this (slightly modified) version of the code:
public class Test<T> {
    public T doSomethingWithT(T t) {
        return t;
    }

    public <U> U doSomethingWithU(U u) {
        return u;
    }
}
The scope of the class-level generic parameter T is the whole class, while the scope of the method-level generic parameter U is only the one method it is declared on. This leads to the following observation:
// T is bound to type String for the instance testString:
final Test<String> testString = new Test<>();
final String tString = testString.doSomethingWithT("Hello");
System.out.println(tString); // prints "Hello"
// will not compile since 1 is not a String:
// int tInt = testString.doSomethingWithT(1);
// For this one invocation of doSomethingWithU(...), U is bound to
// type String:
final String uString = testString.doSomethingWithU("World!");
System.out.println(uString); // prints "World!"
// for this one invocation of doSomethingWithU(...), U is bound to
// type Integer:
final int uInt = testString.doSomethingWithU(1);
System.out.println(uInt); // prints "1"
Ideone demo
Notice that, although doSomethingWithU(...) is a generic method, we did not have to specify the generic parameter; the compiler inferred the type for us. While seldom used, we can also explicitly specify the generic parameter for this method:
final Test<String> testString = new Test<>();
final Number number = testString.<Number>doSomethingWithU(1);
System.out.println(number);
Ideone demo
(In this example, the explicit generic parameter is not necessary, the code works without it as well, but there are rare cases where this may be useful or even necessary.)
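One such case, as a rough sketch: when the generic method call is itself the receiver of further calls, there is no target type to infer from, so the explicit type witness is the only way to pin the type down.
// Without the <String> witness, emptyList() would be inferred as List<Object> here,
// and orElse("none") would then yield Object instead of String:
String first = java.util.Collections.<String>emptyList()
    .stream()
    .findFirst()
    .orElse("none");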
The following is not strictly necessary for understanding generic methods; it is more of a curiosity one might find in code, and it is meant to prime the reader that it is bad practice, should not be used, and should be removed when seen.
It should also be mentioned that the JLS allows us to add explicit type arguments to invocations of methods that do not declare any type parameters. Those type arguments do not have any effect:
Object o = new Object();
// Method "hashCode()" on "Object" has not generic parameters, one
// can "add" one to the method invocation, it has no effect on the
// semantics, however
int hash = o.<String>hashCode();
Ideone demo
A remark on the code: In Java, methods should be written in camelCase instead of CamelCase (DoSomething(...) -> doSomething(...))
This is based on this question. Consider this example where a method returns a Consumer based on a lambda expression:
import java.util.function.Consumer;

public class TestClass {
    public static void main(String[] args) {
        MyClass m = new MyClass();
        Consumer<String> fn = m.getConsumer();
        System.out.println("Just to put a breakpoint");
    }
}

class MyClass {
    final String foo = "foo";

    public Consumer<String> getConsumer() {
        return bar -> System.out.println(bar + foo);
    }
}
As we know, it's not good practice to reference enclosing state inside a lambda when doing functional programming; one reason is that the lambda captures the enclosing instance, which will not be garbage collected as long as the lambda itself is reachable.
However, in this specific scenario related to final strings, it seems the compiler could have just captured the constant (final) string foo (from the constant pool) in the returned lambda, instead of capturing the whole MyClass instance, as shown below while debugging (placing the breakpoint at the System.out.println). Does it have to do with the way lambdas are compiled to a special invokedynamic bytecode?
In your code, bar + foo is really shorthand for bar + this.foo; we're just so used to the shorthand that we forget we are implicitly fetching an instance member. So your lambda is capturing this, not this.foo.
If your question is "could this feature have been implemented differently", the answer is "probably yes"; we could have made the specification/implementation of lambda capture arbitrarily more complicated in the aim of providing incrementally better performance for a variety of special cases, including this one.
Changing the specification so that we captured this.foo instead of this wouldn't change much in the way of performance; it would still be a capturing lambda, which is a much bigger cost consideration than the extra field dereference. So I don't see this as providing a real performance boost.
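To make that distinction concrete, here is a small sketch; the instance-reuse behavior noted in the comments is typical of current JDK implementations, not something guaranteed by the specification:
import java.util.function.Supplier;

public class CaptureCost {
    static Supplier<String> nonCapturing() {
        return () -> "constant";   // captures nothing
    }

    static Supplier<String> capturing(String s) {
        return () -> s;            // captures s
    }

    public static void main(String[] args) {
        // Non-capturing: the same lambda instance is typically reused per call site
        System.out.println(nonCapturing() == nonCapturing()); // usually true
        // Capturing: a new instance is typically allocated for every evaluation
        System.out.println(capturing("x") == capturing("x")); // typically false
    }
}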
If the lambda was capturing foo instead of this, you could in some cases get a different result. Consider the following example:
import java.util.function.Consumer;

public class TestClass {
    public static void main(String[] args) {
        MyClass m = new MyClass();
        m.consumer.accept("bar2");
    }
}

class MyClass {
    final String foo;
    final Consumer<String> consumer;

    public MyClass() {
        consumer = getConsumer();
        // first call to illustrate the value that would have been captured
        consumer.accept("bar1");
        foo = "foo";
    }

    public Consumer<String> getConsumer() {
        return bar -> System.out.println(bar + foo);
    }
}
Output:
bar1null
bar2foo
If foo was captured by the lambda, it would be captured as null and the second call would print bar2null. However since the MyClass instance is captured, it prints the correct value.
Of course this is ugly code and a bit contrived, but in more complex, real-life code, such an issue could somewhat easily occur.
Note that the only truly ugly thing is that we are forcing a read of the not-yet-assigned foo in the constructor, through the consumer. Building the consumer itself is not expected to read foo at that time, so it is still legitimate to build it before assigning foo – as long as you don't use it immediately.
However the compiler will not let you initialize the same consumer in the constructor before assigning foo – probably for the best :-)
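For illustration, this is roughly the variant the compiler rejects, sketched against the fields of the example above:
public MyClass() {
    // Writing the lambda inline before the assignment is rejected:
    // consumer = bar -> System.out.println(bar + foo); // error: variable foo might not have been initialized

    // Going through a method is accepted, even though it builds the same lambda:
    consumer = getConsumer();
    foo = "foo";
}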
You are right, it technically could do so, because the field in question is final, but it doesn't.
However, if it is a problem that the returned lambda retains the reference to the MyClass instance, then you can easily fix it yourself:
public Consumer<String> getConsumer() {
    String f = this.foo;
    return bar -> System.out.println(bar + f);
}
Note, that if the field hadn't been final, then your original code would use the actual value at the time the lambda is executed, while the code listed here would use the value as of the time the getConsumer() method is executed.
Note that for any ordinary Java access to a field that is a compile-time constant, the constant value is used in place of an actual field read, so, contrary to what some people have claimed, it is immune to initialization order issues.
We can demonstrate this by the following example:
import java.util.function.Supplier;

abstract class Base {
    Base() {
        // bad coding style, don't do this in real code
        printValues();
    }

    void printValues() {
        System.out.println("var1 read: " + getVar1());
        System.out.println("var2 read: " + getVar2());
        System.out.println("var1 via lambda: " + supplier1().get());
        System.out.println("var2 via lambda: " + supplier2().get());
    }

    abstract String getVar1();
    abstract String getVar2();
    abstract Supplier<String> supplier1();
    abstract Supplier<String> supplier2();
}

public class ConstantInitialization extends Base {
    final String realConstant = "a constant";
    final String justFinalVar; { justFinalVar = "a final value"; }

    ConstantInitialization() {
        System.out.println("after initialization:");
        printValues();
    }

    @Override String getVar1() {
        return realConstant;
    }

    @Override String getVar2() {
        return justFinalVar;
    }

    @Override Supplier<String> supplier1() {
        return () -> realConstant;
    }

    @Override Supplier<String> supplier2() {
        return () -> justFinalVar;
    }

    public static void main(String[] args) {
        new ConstantInitialization();
    }
}
It prints:
var1 read: a constant
var2 read: null
var1 via lambda: a constant
var2 via lambda: null
after initialization:
var1 read: a constant
var2 read: a final value
var1 via lambda: a constant
var2 via lambda: a final value
So, as you can see, even though the write to the realConstant field has not happened yet when the super constructor is executed, no uninitialized value is seen for the true compile-time constant, not even when accessing it via a lambda expression. Technically, that's because the field isn't actually read.
Also, nasty Reflection hacks have no effect on ordinary Java access to compile-time constants, for the same reason. The only way to read such a modified value back is via Reflection:
import java.lang.reflect.Field;
import java.util.function.Consumer;

public class TestCapture {
    static class MyClass {
        final String foo = "foo";

        private Consumer<String> getFn() {
            //final String localFoo = foo;
            return bar -> System.out.println("lambda: " + bar + foo);
        }
    }

    public static void main(String[] args) throws ReflectiveOperationException {
        final MyClass obj = new MyClass();
        Consumer<String> fn = obj.getFn();
        // change the final field obj.foo
        Field foo = obj.getClass().getDeclaredFields()[0];
        foo.setAccessible(true);
        foo.set(obj, "bar");
        // prove that our lambda expression doesn't read the modified foo
        fn.accept("");
        // show that it captured obj
        Field capturedThis = fn.getClass().getDeclaredFields()[0];
        capturedThis.setAccessible(true);
        System.out.println("captured obj: " + (obj == capturedThis.get(fn)));
        // and obj.foo contains "bar" when actually read
        System.out.println("via Reflection: " + foo.get(capturedThis.get(fn)));
        // but no ordinary Java access will actually read it
        System.out.println("ordinary field access: " + obj.foo);
    }
}
It prints:
lambda: foo
captured obj: true
via Reflection: bar
ordinary field access: foo
which shows us two things: Reflection also has no effect on compile-time constants, and the surrounding object has been captured even though it won't be used.
I’d be happy to find an explanation like, “any access to an instance field requires the lambda expression to capture the instance of that field (even if the field is not actually read)”, but unfortunately I couldn’t find any statement regarding capturing of values or this in the current Java Language Specification, which is a bit frightening:
We got used to the fact that not accessing instance fields in a lambda expression will create an instance which doesn’t have a reference to this, but even that isn’t actually guaranteed by the current specification. It’s important that this omission gets fixed soon…
There are several similar questions on SO about method reference to local class constructor, but I'd like to clarify slightly other thing. Consider following piece of code:
static Callable gen(int i) {
    class X {
        int x = i;
        public String toString() { return "" + x; }
    }
    return X::new;
}
...
System.out.println(gen(0).call());
System.out.println(gen(1).call());
Obviously this will printout
0
1
It turns out that the class X has a constructor of the form ...$X(int) (you can find it via X.class.getDeclaredConstructors()).
But what is interesting here is that the returned lambdas (or method references) aren't simple references to the constructor ...$X(int) like, for example, Integer::new. They internally invoke this constructor ...$X(int) with a predefined argument (0 or 1).
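As a hedged sketch, this is how one might observe that constructor from inside gen(...), where the local class X is in scope (the exact generated class name varies by compiler):
// Sketch only: run inside gen(...), where X is visible
for (java.lang.reflect.Constructor<?> c : X.class.getDeclaredConstructors()) {
    System.out.println(c); // prints something like ...$1X(int) with a typical javac
}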
So, I'm not sure, but it looks like this kind of method reference is not precisely described in the JLS. And there seems to be no other way, apart from this case with local classes, to produce such lambdas (with predefined constructor arguments). Who can help clarify this?
To be precise:
where in the JLS is this kind of method reference described?
is there any other way to create such a method reference to an arbitrary class constructor with predefined arguments?
You are focusing too much on irrelevant low level details. On the byte code level, there might be a constructor accepting an int parameter, but on the language level, you didn’t specify an explicit constructor, hence, there will be a default constructor without any arguments, as with any other class.
This should become clear when you write the pre-Java 8 code:
static Callable<Object> gen(int i) {
    class X {
        int x = i;
        public String toString() { return "" + x; }
    }
    X x = new X();
…
You instantiate X via its default constructor, which does not take any arguments. Your local class captures the value of i, but how it does so at the low level, i.e. that X's constructor has a synthetic int parameter and the new expression passes the value of i to it, is an implementation detail.
You can even add an explicit constructor as
X() {}
without changing anything.
Obviously, you can also write the expression new X() inside a lambda expression here, as expressions don’t change their semantics when being placed inside a lambda expression:
return () -> new X();
or use its shorthand form, the method reference:
return X::new;
There is nothing special about it; the behavior is understandable even without referring to the specification, if you forget about the distracting low-level details. X may capture as many local variables as you like; the constructor's number of parameters doesn't change (at the language level).
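A small sketch of that point: capturing two locals still leaves X with, at the language level, a no-argument constructor.
static Callable<Object> gen(int i, String s) {
    class X {
        int x = i;      // captured local
        String y = s;   // another captured local
        public String toString() { return x + ":" + y; }
    }
    return X::new;      // still written as a reference to the no-arg constructor
}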
This behaviour is defined in the JLS section §15.13.3:
If the form is ClassType :: [TypeArguments] new, the body of the invocation method has the effect of a class instance creation expression of the form new [TypeArguments] ClassType(A1, ..., An), where the arguments A1, ..., An are the formal parameters of the invocation method, and where:
The enclosing instance for the new object, if any, is derived from the site of the method reference expression, as specified in §15.9.2.
The constructor to invoke is the constructor that corresponds to the compile-time declaration of the method reference (§15.13.1).
Although this talks about enclosing instances, captured variables and parameters are not mentioned in §15.13.3.
As for your second question, you need to manually capture and change the parameter:
static Callable gen(int i) {
    final int i1 = someCondition() ? i : 42;
    class X {
        int x = i1; // <-
        public String toString() { return "" + x; }
    }
    return X::new;
}
Why does the following code compile?
The method IElement.getX(String) returns an instance of the type IElement or of subclasses thereof. The code in the Main class invokes the getX(String) method. The compiler allows storing the return value in a variable of type Integer (which obviously is not in the hierarchy of IElement).
public interface IElement extends CharSequence {
    <T extends IElement> T getX(String value);
}
public class Main {
    public void example(IElement element) {
        Integer x = element.getX("x");
    }
}
Shouldn't the return type still be an instance of IElement - even after the type erasure?
The bytecode of the getX(String) method is:
public abstract <T extends IElement> T getX(java.lang.String);
flags: ACC_PUBLIC, ACC_ABSTRACT
Signature: #7 // <T::LIElement;>(Ljava/lang/String;)TT;
Edit: Replaced String consistently with Integer.
This is actually a legitimate type inference*.
We can reduce this to the following example (Ideone):
interface Foo {
    <F extends Foo> F bar();

    public static void main(String[] args) {
        Foo foo = null;
        String baz = foo.bar();
    }
}
The compiler is allowed to infer a (nonsensical, really) intersection type String & Foo because Foo is an interface. For the example in the question, Integer & IElement is inferred.
It's nonsensical because the conversion is impossible. We can't do such a cast ourselves:
// won't compile because Integer is final
Integer x = (Integer & IElement) element;
Type inference basically works with:
a set of inference variables for each of a method's type parameters.
a set of bounds that must be conformed to.
sometimes constraints, which are reduced to bounds.
At the end of the algorithm, each variable is resolved to an intersection type based on the bound set, and if they're valid, the invocation compiles.
The process begins in 8.1.3:
When inference begins, a bound set is typically generated from a list of type parameter declarations P1, ..., Pp and associated inference variables α1, ..., αp. Such a bound set is constructed as follows. For each l (1 ≤ l ≤ p):
[…]
Otherwise, for each type T delimited by & in a TypeBound, the bound αl <: T[P1:=α1, ..., Pp:=αp] appears in the set […].
So, this means first the compiler starts with a bound of F <: Foo (which means F is a subtype of Foo).
Moving to 18.5.2, the return target type gets considered:
If the invocation is a poly expression, […] let R be the return type of m, let T be the invocation's target type, and then:
[…]
Otherwise, the constraint formula ‹R θ → T› is reduced and incorporated with [the bound set].
The constraint formula ‹R θ → T› gets reduced to another bound of R θ <: T, so we have F <: String.
Later on these get resolved according to 18.4:
[…] a candidate instantiation Ti is defined for each αi:
Otherwise, where αi has proper upper bounds U1, ..., Uk, Ti = glb(U1, ..., Uk).
The bounds α1 = T1, ..., αn = Tn are incorporated with the current bound set.
Recall that our set of bounds is F <: Foo, F <: String. glb(String, Foo) is defined as String & Foo. This is apparently a legitimate type for glb, which only requires that:
It is a compile-time error if, for any two classes (not interfaces) Vi and Vj, Vi is not a subclass of Vj or vice versa.
Finally:
If resolution succeeds with instantiations T1, ..., Tp for inference variables α1, ..., αp, let θ' be the substitution [P1:=T1, ..., Pp:=Tp]. Then:
If unchecked conversion was not necessary for the method to be applicable, then the invocation type of m is obtained by applying θ' to the type of m.
The method is therefore invoked with String & Foo as the type of F. We can of course assign this to a String, thus impossibly converting a Foo to a String.
The fact that String/Integer are final classes is apparently not considered.
* Note: type erasure is/was completely unrelated to the issue.
Also, while this compiles on Java 7 as well, I think it's reasonable to say we needn't worry about the specification there. Java 7's type inference was essentially a less sophisticated version of Java 8's. It compiles for similar reasons.
As an addendum, while strange, this will likely never cause a problem that was not already present. It's rarely useful to write a generic method whose return type is solely inferred from the return target, because only null can be returned from such a method without casting.
Suppose for example we have some map analog which stores subtypes of a particular interface:
interface FooImplMap {
    void put(String key, Foo value);
    <F extends Foo> F get(String key);
}
class Bar implements Foo {}
class Biz implements Foo {}
It's already perfectly valid to make an error such as the following:
FooImplMap m = ...;
m.put("b", new Bar());
Biz b = m.get("b"); // casting Bar to Biz
So the fact that we can also do Integer i = m.get("b"); is not a new possibility for error. If we were programming code like this, it was already potentially unsound to begin with.
Generally, a type parameter should only be solely inferred from the target type if there is no reason to bound it, e.g. Collections.emptyList() and Optional.empty():
private static final Optional<?> EMPTY = new Optional<>();

public static <T> Optional<T> empty() {
    @SuppressWarnings("unchecked")
    Optional<T> t = (Optional<T>) EMPTY;
    return t;
}
This is A-OK because Optional.empty() can neither produce nor consume a T.
I have a question about Generics in Java, namely using wildcards. I have an example class GenClass like this:
public class GenClass<E> {
    private E var;

    public void setVar(E x) {
        var = x;
    }

    public E getVar() {
        return var;
    }
}
I have another simple class:
public class ExampleClass {
}
I have written the following test class:
public class TestGenClass {
    public static void main(String[] str) {
        ExampleClass ec = new ExampleClass();
        GenClass<ExampleClass> c = new GenClass<ExampleClass>();
        c.setVar(ec);
        System.out.println(c.getVar()); // OUTPUT: ExampleClass#addbf1
    }
}
Now, if I use a wildcard and write in the test class this:
GenClass<?> c = new GenClass<ExampleClass>();
on the place of:
GenClass<ExampleClass> c = new GenClass<ExampleClass>();
the compiler has no problem with this new statement, however, it complains about
c.setVar(ec);
It says that "the method (setVar()) is not applicable for the arguments (ExampleClass)". Why do I get this message?
I thought that the way I have used the wildcard makes the reference variable c be of type GenClass, which would accept any class as parameter - in the place of E I would have any class. This is just the declaration of the variable. Then I initialize it with
new GenClass<ExampleClass>()
which means that I create an object of type GenClass, which has ExampleClass as its type parameter. So, I thought that E in GenClass would now be ExampleClass, and that I would be able to use the method setVar(), giving it something of type ExampleClass as an argument.
This was my assumption and understanding, but it seems that Java does not like it, and I am not right.
Any comment is appreciated, thank you.
This exact situation is covered in the Java Generics Tutorial.
Notice that [with the wildcard], we can still read elements from [the generic Collection] and give them type Object. This is always safe, since whatever the actual type of the collection, it does contain objects. It isn't safe to add arbitrary objects to it however:
Collection<?> c = new ArrayList<String>();
c.add(new Object()); // Compile time error
Since we don't know what the element type of c stands for, we cannot add objects to it. The add() method takes arguments of type E, the element type of the collection. When the actual type parameter is ?, it stands for some unknown type. Any parameter we pass to add would have to be a subtype of this unknown type. Since we don't know what type that is, we cannot pass anything in. The sole exception is null, which is a member of every type.
(emphasis mine)
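Applied to the GenClass from the question, that rule plays out like this (a small sketch):
GenClass<?> c = new GenClass<ExampleClass>();
// c.setVar(new ExampleClass()); // compile-time error: the wildcard's type is unknown
c.setVar(null);                   // only null is accepted
Object value = c.getVar();        // reading as Object is always safe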
mmyers has the correct answer, but I just wanted to comment on this part of your question (which sounds like your rationale for wanting to use the wildcard):
I thought that the way I have used the wildcard makes the reference variable c be of type GenClass, which would accept any class as parameter - in the place of E I would have any class. This is just the declaration of the variable. Then I initialize it with
If you really want to accomplish this, you could do something like the following without compilation errors:
GenClass<Object> gc = new GenClass<Object>();
gc.setVar(new ExampleClass());
But then again, if you want to declare an instance of GenClass that can contain any type, I'm not sure why you'd want to use generics at all - you could just use the raw class:
GenClass raw = new GenClass();
raw.setVar(new ExampleClass());
raw.setVar("this runs ok");