Java parameterized with more than one class [duplicate] - java

I know there's all sorts of counter-intuitive properties of Java's generic types. Here's one in particular that I don't understand, and which I'm hoping someone can explain to me. When specifying a type parameter for a class or interface, you can bound it so that it must implement multiple interfaces with public class Foo<T extends InterfaceA & InterfaceB>. However, if you're instantiating an actual object, this doesn't work anymore. List<? extends InterfaceA> is fine, but List<? extends InterfaceA & InterfaceB> fails to compile. Consider the following complete snippet:
import java.util.LinkedList;
import java.util.List;

public class Test {
    static interface A {
        public int getSomething();
    }
    static interface B {
        public int getSomethingElse();
    }
    static class AandB implements A, B {
        public int getSomething() { return 1; }
        public int getSomethingElse() { return 2; }
    }
    // Notice the multiple bounds here. This works.
    static class AandBList<T extends A & B> {
        List<T> list;
        public List<T> getList() { return list; }
    }
    public static void main(String[] args) {
        AandBList<AandB> foo = new AandBList<AandB>(); // This works fine!
        foo.getList().add(new AandB());
        List<? extends A> bar = new LinkedList<AandB>(); // This is fine too
        // This last one fails to compile!
        List<? extends A & B> foobar = new LinkedList<AandB>();
    }
}
It seems the semantics of foobar should be well-defined -- I can't think of any loss of type safety caused by allowing an intersection of two types rather than just one. I'm sure there's an explanation though. Does anyone know what it is?

Interestingly, the interface java.lang.reflect.WildcardType looks like it supports both upper bounds and lower bounds for a wildcard argument, and each can contain multiple bounds:
Type[] getUpperBounds();
Type[] getLowerBounds();
This is way beyond what the language allows. There's a hidden comment in the source code
// one or many? Up to language spec; currently only one, but this API
// allows for generalization.
The author of the interface seems to consider that this is an accidental limitation.
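As a small aside, the reflection API's multi-bound-capable shape is easy to observe. The sketch below (class and field names are made up for illustration) reads the wildcard of a List<? extends Number> field back via WildcardType; under the current language rules, getUpperBounds() always yields a single-element array, and getLowerBounds() is empty unless ? super was used.

import java.lang.reflect.ParameterizedType;
import java.lang.reflect.WildcardType;
import java.util.Arrays;
import java.util.List;

public class WildcardBoundsDemo {
    static List<? extends Number> field;

    public static void main(String[] args) throws Exception {
        ParameterizedType listType =
            (ParameterizedType) WildcardBoundsDemo.class.getDeclaredField("field").getGenericType();
        WildcardType wildcard = (WildcardType) listType.getActualTypeArguments()[0];
        System.out.println(Arrays.toString(wildcard.getUpperBounds())); // [class java.lang.Number]
        System.out.println(Arrays.toString(wildcard.getLowerBounds())); // []
    }
}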
The canned answer to your question is that generics are already complicated enough as they are; adding more complexity might prove to be the last straw.
To allow a wildcard to have multiple upper bounds, one has to scan through the spec and make sure the entire system still works.
One trouble I know of would be in type inference. The current inference rules simply can't deal with intersection types. There is no rule to reduce a constraint A&B << C. If we reduced it to
A << C
or
B << C
any current inference engine would have to go through a major overhaul to allow such bifurcation. But the really serious problem is that this allows multiple solutions, and there's no justification for preferring one over another.
However, inference is not essential to type safety; we could simply refuse to infer in this case and ask the programmer to explicitly fill in the type arguments. Therefore, difficulty of inference is not a strong argument against intersection types.

From the Java Language Specification:
4.9 Intersection Types
An intersection type takes the form T1 & ... & Tn, n > 0, where Ti, 1 ≤ i ≤ n, are type expressions. Intersection types arise in the processes of capture conversion (§5.1.10) and type inference (§15.12.2.7). It is not possible to write an intersection type directly as part of a program; no syntax supports this. The values of an intersection type are those objects that are values of all of the types Ti, for 1 ≤ i ≤ n.
So why is this not supported? My guess: what would you do with such a thing? Let's suppose it were possible:
List<? extends A & B> list = ...
Then what should
list.get(0);
return? There's no syntax to capture a return value of A & B. Adding something into such a list would not be possible either, so it's basically useless.

No problem... just declare the type you need in the method signature.
This compiles:
public static <T extends A & B> void main(String[] args) throws Exception {
    AandBList<AandB> foo = new AandBList<AandB>(); // This works fine!
    foo.getList().add(new AandB());
    List<? extends A> bar = new LinkedList<AandB>(); // This is fine too
    List<T> foobar = new LinkedList<T>(); // This compiles!
}

Good question. It took me a while to figure out.
Let's simplify your case: you are trying to do the same thing as declaring a class that implements 2 interfaces, and then a variable whose type is both of those interfaces, something like this:
class MyClass implements Int1, Int2 { }
Int1 & Int2 variable = new MyClass()
Of course, illegal.
And this is equivalent to what you are trying to do with generics.
What you are trying to do is:
List<? extends A & B> foobar;
But then, to use foobar, you would need a variable of both interfaces, like this:
A & B element = foobar.get(0);
Which is not legal in Java. It means you are declaring the elements of the list as being of 2 types simultaneously, and even if our brains can deal with that, the Java language cannot.

For what it's worth: if anyone is wondering about this because they would truly like to use it in practice, I've worked around it by defining an interface that contains the union of all methods in all the interfaces and classes I'm working with, i.e. I was trying to do the following:
class A {}
interface B {}
List<? extends A & B> list;
which is illegal - so instead I did this:
class A {
    <A methods>
}
interface B {
    <B methods>
}
interface C {
    <A methods>
    <B methods>
}
List<C> list;
This still isn't as useful as being able to write something like List<? extends A implements B>; e.g. if someone adds or removes methods on A or B, the typing of the list is not updated automatically but requires a manual change to C. But it has worked for my needs.
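When both bounds happen to be interfaces, as with A and B in the question's own snippet, the combining type doesn't even need to redeclare any methods. A minimal sketch (the AB, Both and CombinedInterfaceDemo names are made up for illustration; A and B mirror the question's interfaces):

import java.util.LinkedList;
import java.util.List;

interface A { int getSomething(); }
interface B { int getSomethingElse(); }

// A tiny combining interface stands in for "A & B" without redeclaring anything.
interface AB extends A, B {}

class Both implements AB {
    public int getSomething() { return 1; }
    public int getSomethingElse() { return 2; }
}

class CombinedInterfaceDemo {
    public static void main(String[] args) {
        List<AB> list = new LinkedList<AB>();
        list.add(new Both());
        A a = list.get(0); // usable as an A...
        B b = list.get(0); // ...and as a B
        System.out.println(a.getSomething() + " " + b.getSomethingElse());
    }
}

The trade-off is the same as described above: implementations must opt in by implementing AB rather than just A and B separately.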

Related

How does the Java compiler choose the runtime type for a parameterized type with multiple bounds?

I would like to understand better what happens when the Java compiler encounters a call to a method like the one below.
<T extends AutoCloseable & Cloneable>
void printType(T... args) {
    System.out.println(args.getClass().getComponentType().getSimpleName());
}
// printType() prints "AutoCloseable"
It is clear to me that there is no type <T extends AutoCloseable & Cloneable> at runtime, so the compiler does the least wrong thing it can: it creates an array with the type of one of the two bounding interfaces, discarding the other one.
Anyway, if the order of the interfaces is switched, the result is still the same.
<T extends Cloneable & AutoCloseable>
void printType(T... args) {
    System.out.println(args.getClass().getComponentType().getSimpleName());
}
// printType() prints "AutoCloseable"
This led me to do some more investigation and see what happens when the interfaces change.
It seems to me that the compiler uses some kind of strict order rule to decide which interface is the most important, and the order the interfaces appear in code plays no role.
<T extends AutoCloseable & Runnable> // "AutoCloseable"
<T extends Runnable & AutoCloseable> // "AutoCloseable"
<T extends AutoCloseable & Serializable> // "Serializable"
<T extends Serializable & AutoCloseable> // "Serializable"
<T extends SafeVarargs & Serializable> // "SafeVarargs"
<T extends Serializable & SafeVarargs> // "SafeVarargs"
<T extends Channel & SafeVarargs> // "Channel"
<T extends SafeVarargs & Channel> // "Channel"
<T extends AutoCloseable & Channel & Cloneable & SafeVarargs> // "Channel"
Question:
How does the Java compiler determine the component type of a varargs array of a parameterized type when there are multiple bounds?
I'm not even sure if the JLS says anything about this, and none of the information I found by googling covers this particular topic.
Typically, when the compiler encounters a call to a parameterised method, it can infer the type (JLS 18.5.2) and create a correctly typed vararg array in the caller.
The rules are mostly technical ways of saying "find all possible input types and check them" (covering cases like void, the ternary operator, or lambdas).
The rest is common sense, such as using the most specific common base class (JLS 4.10.4).
Example:
public class Test {
    private static class A implements AutoCloseable, Runnable {
        @Override public void close() throws Exception {}
        @Override public void run() {}
    }
    private static class B implements AutoCloseable, Runnable {
        @Override public void close() throws Exception {}
        @Override public void run() {}
    }
    private static class C extends B {}

    private static <T extends AutoCloseable & Runnable> void printType(T... args) {
        System.out.println(args.getClass().getComponentType().getSimpleName());
    }

    public static void main(String[] args) {
        printType(new A());            // A[] created here
        printType(new B(), new B());   // B[] created here
        printType(new B(), new C());   // B[] which is the common base class
        printType(new A(), new B());   // AutoCloseable[] - well...
        printType();                   // AutoCloseable[] - same as above
    }
}
JLS 18.2 dictates how to process the constraints for type inference, e.g. AutoCloseable & Channel is reduced to just Channel.
But those rules do not help answer this question.
Getting AutoCloseable[] from the call may look weird, of course, because we can't do that with Java code.
But in reality the actual type doesn't matter.
At the language level, args is T[], where T is a "virtual type" that is both A and B (JLS 4.9).
The compiler just needs to make sure its usages meet all constraints, and then it knows the logic is sound and there will be no type error (this is how Java generics are designed).
Of course the compiler still needs to make a real array, and for this purpose it creates a "generic array".
Hence the warning "unchecked generic array creation" (JLS 15.12.4.2).
In other words, as long as you pass in only AutoCloseable & Runnable, and call only Object, AutoCloseable, and Runnable methods in printType, the actual array type does not matter.
In fact, printType's bytecode would be the same regardless of what kind of array is passed in.
Since printType doesn't care about the vararg array type, getComponentType() doesn't and shouldn't matter.
If you want to get the interfaces, try getGenericInterfaces(), which returns an array.
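If what you actually want is the declared bounds of T rather than the runtime array type, they survive in the class file and can be read back from the method's type variable. A sketch (assuming the Test class from the example above is compiled alongside; BoundsInspector is a made-up name):

import java.lang.reflect.Method;
import java.lang.reflect.TypeVariable;
import java.util.Arrays;

class BoundsInspector {
    public static void main(String[] args) throws Exception {
        // The erased signature is printType(AutoCloseable[]) because AutoCloseable is the first bound.
        Method m = Test.class.getDeclaredMethod("printType", AutoCloseable[].class);
        TypeVariable<Method> t = m.getTypeParameters()[0];
        System.out.println(Arrays.toString(t.getBounds()));
        // prints [interface java.lang.AutoCloseable, interface java.lang.Runnable]
    }
}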
Because of type erasure (JLS 4.6), the order of the interfaces of T does affect (JLS 13.1) the compiled method signature and bytecode. The first interface, AutoCloseable, will be used; e.g. no type check will be done when AutoCloseable.close() is called in printType.
But this is unrelated to the type inference of the method calls in the question, i.e. why AutoCloseable[] is created and passed. Many type-safety checks happen before erasure, so the order does not affect type safety. This, I think, is part of what the JLS means by "The order of types... is only significant in that the erasure ... is determined by the first type" (JLS 4.4). It means the order is otherwise insignificant.
Regardless, this erasure rule does cause corner cases, such as: adding an overload printType(AutoCloseable[]) triggers a compile error, while adding printType(Runnable[]) does not. I believe this is an unexpected side effect and is really out of scope.
P.S. Digging too deep may cause insanity.
This is a very interesting question. The relevant part of the specification is §15.12.4.2. Evaluate Arguments:
If the method being invoked is a variable arity method m, it necessarily has n > 0 formal parameters. The final formal parameter of m necessarily has type T[] for some T, and m is necessarily being invoked with k ≥ 0 actual argument expressions.
If m is being invoked with k ≠ n actual argument expressions, or, if m is being invoked with k = n actual argument expressions and the type of the k'th argument expression is not assignment compatible with T[], then the argument list (e1, ..., en-1, en, ..., ek) is evaluated as if it were written as (e1, ..., en-1, new |T[]| { en, ..., ek }), where |T[]| denotes the erasure (§4.6) of T[].
It’s interestingly vague about what “some T” actually is. The simplest and most straightforward solution would be the declared parameter type of the invoked method; that would be assignment compatible and there is no actual advantage to using a different type. But, as we know, javac doesn’t go that route and instead uses some sort of common base type of all arguments, or picks one of the bounds according to some unknown rule, for the array’s element type. Nowadays you might even find applications in the wild relying on this behavior, assuming they can get some information about the actual T at runtime by inspecting the array type.
This leads to some interesting consequences:
static AutoCloseable[] ARR1;
static Serializable[] ARR2;

static <T extends AutoCloseable & Serializable> void method(T... args) {
    ARR1 = args;
    ARR2 = args;
}

public static void main(String[] args) throws Exception {
    method(null, null);
    ARR2[0] = "foo";
    ARR1[0].close();
}
javac decides to create an array of the actual type Serializable[] here, even though the method’s parameter type is AutoCloseable[] after applying type erasure, which is why the assignment of a String is possible at runtime. So it only fails at the last statement, when attempting to invoke the close() method, with
Exception in thread "main" java.lang.IncompatibleClassChangeError: Class java.lang.String does not implement the requested interface java.lang.AutoCloseable
It’s blaming the class String here, though we could have put any Serializable object into the array as the actual issue is that a static field of the formal declared type AutoCloseable[] refers to an object of the actual type Serializable[].
Though it is a specific behavior of the HotSpot JVM that we even got this far, as its verifier does not check assignments when interface types are involved (including arrays of interface types) but defers checking whether the actual class implements the interface to the last possible moment, when an interface method is actually invoked on it.
Interestingly, type casts are strict when they appear in the class file:
static <T extends AutoCloseable & Serializable> void method(T... args) {
    AutoCloseable[] a = (AutoCloseable[]) args;   // actually removed by the compiler
    a = (AutoCloseable[]) (Object) args;          // fails at runtime
}

public static void main(String[] args) throws Exception {
    method();
}
While javac’s decision for Serializable[] in the above example seems arbitrary, it should be clear that regardless of which type it chooses, one of the field assignments would only be possible in a JVM with lax type checking. We could also highlight the more fundamental nature of the problem:
// erased to method1(AutoCloseable[])
static <T extends AutoCloseable & Serializable> void method1(T... args) {
    method2(args); // valid according to generic types
}

// erased to method2(Serializable[])
static <T extends Serializable & AutoCloseable> void method2(T... args) {
}

public static void main(String[] args) throws Exception {
    // whatever array type the compiler picks, it would violate one of the erased types
    method1();
}
While this doesn’t actually answer the question of which rule javac uses (besides that it uses “some T”), it emphasizes the importance of treating arrays created for a varargs parameter as intended: temporary storage (don’t assign them to fields) of an arbitrary type you’d better not care about.

What type safety would have been lost, had generics supported sub-typing? [duplicate]

This question already has answers here:
Is List<Dog> a subclass of List<Animal>? Why are Java generics not implicitly polymorphic?
(19 answers)
Closed 6 years ago.
Consider the snippet:
Number[] numbers = {1, 2.3, 4.5f, 6000000000000000000L};
It's perfectly okay to do the above, Number is an abstract class.
Going ahead,
List<Long> listLong = new ArrayList<Long>();
listLong.add(Long.valueOf(10));
List<Number> listNumbers = listLong; // compiler error - LINE 3
listNumbers.add(Double.valueOf(1.23));
Had Line 3 been designed to compile successfully,
we would end up with a List of Numbers, i.e.,
for (Number num : listNumbers) {
    System.out.println(num);
}
// 10
// 1.23
which are all numbers.
I came across this in a book,
Generics doesn’t support sub-typing because it will cause issues in
achieving type safety. That’s why List<T> is not considered as a
subtype of List<S> where S is the super-type of T
What type safety would have been lost in this specific case, as discussed above, had Line 3 compiled successfully?
List<Long> listLong = new ArrayList<Long>();
List<Number> listNumbers = listLong;
So, listNumbers and listLong would be two references to the same list, if that was possible, right?
listNumbers.add(Double.valueOf(1.23));
So you would be able to add a Double to that list. listLong, declared as List<Long>, would then contain a Double, and type safety would thus be broken.
If that were the case, we could add any other subtype of Number into listNumbers, which must be forbidden.
Imagine you insert objects of type Double and Long, and later try to use Long#reverse on the elements. Your code will compile but will of course fail at runtime on the first Double it comes across.
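To make the failure concrete, here is a minimal runnable sketch (class name made up for illustration); the unchecked cast stands in for the Line 3 assignment that the compiler rejects:

import java.util.ArrayList;
import java.util.List;

public class HeapPollutionDemo {
    public static void main(String[] args) {
        List<Long> listLong = new ArrayList<Long>();
        listLong.add(Long.valueOf(10));

        @SuppressWarnings("unchecked")
        List<Number> listNumbers = (List<Number>) (List<?>) listLong; // simulates Line 3
        listNumbers.add(Double.valueOf(1.23)); // compiles, and pollutes the List<Long>

        Long l = listLong.get(1); // ClassCastException: Double cannot be cast to Long
        System.out.println(l);
    }
}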
Let's use an example with a non-abstract base class:
public class Human {
    public String getName() {
        // ...
    }
}
public class Student extends Human {
    public void learn(Subject subject) {
        // ...
    }
}
public class Teacher extends Human {
    public void teach(Subject subject) {
        // ...
    }
}
At any place where a Human is expected, a Student or Teacher will do just as well, as they fully implement the Human interface. (In this case, that getName() can be called on them.) Java inheritance guarantees that this is the case technically. Making it work semantically is the class author's job, so that his code fulfils the Liskov substitution principle.
So doesn't this mean that we can also substitute Collection<Teacher> where a Collection<Human> is expected? Not always. Consider the following method:
public class Human {
    // ...
    public void join(Set<Human> party) {
        party.add(this);
    }
}
Now, if Java allowed a Set<Student> to be passed as party, any attempts of non-Student Humans to join that party would have to fail at runtime.
As a general rule, a container of a subtype is unsuitable if the receiver (callee in case of a function argument, caller in case of a function return value) wants to put something into it, but acceptable if the receiver only want to take stuff out and use it. A container of a supertype is unsuitable if the receiver wants to take stuff out and use it, but acceptable if the receiver only ever puts stuff into it. As a result, if the receiver both takes stuff out of the collection and puts stuff into the collection, they usually must require a collection of a fixed type.
Our join method only puts Humans into the party, so a Set<Object> (or even a raw Set) would serve just as well. Java allows us to express this with lower-bounded wildcards:
public class Human {
    // ...
    public void join(Set<? super Human> party) {
        party.add(this);
    }
}
For opening up the possibilities towards subclasses, there's upper-bounded wildcards:
public class Teacher extends Human {
    public void teach(Subject subject, Set<? extends Student> schoolClass) {
        for (Student student : schoolClass) {
            student.learn(subject);
        }
    }
}
Now, if we ever subclass Student, the passed schoolClass can be a Set of that subtype, too.
The concept you are referring to is variance.
In other words, if S is a supertype of T, is List<S> a subtype, supertype, equal type, or unrelated to List<T>?
The answer for List -- and all other Java generics* -- is "unrelated", i.e. invariant.
class SuperType {}
class Type extends SuperType {}
class SubType extends Type {}

List<Type> list = ...
List<SuperType> superList = list;
superList.add(new SuperType());
// no, we shouldn't be able to add a SuperType to list

List<SubType> subList = list;
SubType item = subList.get(0);
// no, there's not necessarily only SubType items in list
*Java does have the notion of "use-site" variance, with wildcards (?). This will limit what methods are possible to call.
List<Type> list = ...
List<? super SubType> wildcardList = list;
wildcardList.add(new SubType());
// but...everything we get() is an Object

or

List<Type> list = ...
List<? extends SuperType> wildcardList = list;
SuperType item = wildcardList.get(0);
// but...it's impossible to add()
FYI, some languages have the notion of definition-site variance, e.g. Scala. So List[Int] is indeed a subtype of List[Number]. That's possible with immutable collections (again, a limited set of methods), but obviously not for mutable ones.

Bounding generics with 'super' keyword

Why can I use super only with wildcards and not with type parameters?
For example, in the Collection interface, why is the toArray method not written like this
interface Collection<T> {
    <S super T> S[] toArray(S[] a);
}
Using super to bound a named type parameter (e.g. <S super T>), as opposed to a wildcard (e.g. <? super T>), is illegal simply because even if it were allowed, it wouldn't do what you'd hope: since Object is the ultimate supertype of all reference types and every object is an Object, there would in effect be no bound.
In your specific example, since any array of a reference type is an Object[] (by Java array covariance), it could therefore be used as an argument to <S super T> S[] toArray(S[] a) (if such a bound were legal) at compile time, and it wouldn't prevent an ArrayStoreException at run time.
What you're trying to propose is that given:
List<Integer> integerList;
and given this hypothetical super bound on toArray:
<S super T> S[] toArray(S[] a) // hypothetical! currently illegal in Java
the compiler should only allow the following to compile:
integerList.toArray(new Integer[0]) // works fine!
integerList.toArray(new Number[0]) // works fine!
integerList.toArray(new Object[0]) // works fine!
and no other array type arguments (since Integer has only those 3 types as supertypes). That is, you're trying to prevent this from compiling:
integerList.toArray(new String[0]) // trying to prevent this from compiling
because, by your argument, String is not a supertype of Integer. However, Object is a supertype of Integer, and a String[] is an Object[], so the compiler would still let the above compile, even if hypothetically you could write <S super T>!
So the following would still compile (just as it does right now), and the ArrayStoreException at run time could not be prevented by any compile-time checking using generic type bounds:
integerList.toArray(new String[0]) // compiles fine!
// throws ArrayStoreException at run-time
Generics and arrays don't mix, and this is one of the many places where it shows.
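For the record, the scenario above is easy to reproduce; a minimal sketch (class name made up for illustration), using only the real Collection.toArray(T[]) signature:

import java.util.Arrays;
import java.util.List;

public class ToArrayDemo {
    public static void main(String[] args) {
        List<Integer> integerList = Arrays.asList(1, 2, 3);
        Object[] ok = integerList.toArray(new Number[0]);   // fine: Integers fit into a Number[]
        System.out.println(Arrays.toString(ok));
        Object[] boom = integerList.toArray(new String[0]); // compiles, but throws
                                                            // ArrayStoreException at run time
    }
}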
A non-array example
Again, let's say that you have this generic method declaration:
<T super Integer> void add(T number) // hypothetical! currently illegal in Java
And you have these variable declarations:
Integer anInteger
Number aNumber
Object anObject
String aString
Your intention with <T super Integer> (if it were legal) is that it should allow add(anInteger), add(aNumber), and of course add(anObject), but NOT add(aString). Well, String is an Object, so add(aString) would still compile anyway.
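Note that the bound only collapses like this on a bare type parameter. Once the bound sits inside a parameterized type, ? super does real work, which is why it exists for wildcards; a sketch (names made up for illustration):

import java.util.ArrayList;
import java.util.List;

class SuperWildcardDemo {
    static void addOne(List<? super Integer> target) {
        target.add(1); // safe: whatever the list's element type is, it can hold an Integer
    }

    public static void main(String[] args) {
        addOne(new ArrayList<Integer>()); // fine
        addOne(new ArrayList<Number>());  // fine
        addOne(new ArrayList<Object>());  // fine
        // addOne(new ArrayList<String>()); // does not compile: String is not a supertype of Integer
    }
}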
See also
Java Tutorials/Generics
Subtyping
More fun with wildcards
Related questions
On generics typing rules:
Any simple way to explain why I cannot do List<Animal> animals = new ArrayList<Dog>()?
java generics (not) covariance
What is a raw type and why shouldn’t we use it?
Explains how raw type List is different from List<Object> which is different from a List<?>
On using super and extends:
Java Generics: What is PECS?
From Effective Java 2nd Edition: "producer extends consumer super"
What is the difference between super and extends in Java Generics
What is the difference between <E extends Number> and <Number>?
How can I add to List<? extends Number> data structures? (YOU CAN'T!)
As no one has provided a satisfactory answer, the correct answer seems to be "for no good reason".
polygenelubricants provided a good overview of bad things happening with Java array covariance, which is a terrible feature by itself. Consider the following code fragment:
String[] strings = new String[1];
Object[] objects = strings;
objects[0] = 0;
This obviously wrong code compiles without resorting to any "super" construct, so array covariance should not be used as an argument.
Now, here I have a perfectly valid example of code requiring super in the named type parameter:
class Nullable<A> {
private A value;
// Does not compile!!
public <B super A> B withDefault(B defaultValue) {
return value == null ? defaultValue : value;
}
}
Potentially supporting some nice usage:
Nullable<Integer> intOrNull = ...;
Integer i = intOrNull.withDefault(8);
Number n = intOrNull.withDefault(3.5);
Object o = intOrNull.withDefault("What's so bad about a String here?");
The latter code fragment does not compile if I remove the B altogether, so B is indeed needed.
Note that the feature I'm trying to implement is easily obtained if I invert the order of type parameter declarations, thus changing the super constraint to extends. However, this is only possible if I rewrite the method as a static one:
// This one actually works and I use it.
public static <B, A extends B> B withDefault(Nullable<A> nullable, B defaultValue) { ... }
The point is that this Java language restriction is indeed ruling out some otherwise useful features and may require ugly workarounds. I wonder what would happen if we needed withDefault to be virtual.
Now, to correlate with what polygenelubricants said, we use B here not to restrict the type of object passed as defaultValue (see the String used in the example), but rather to restrict the caller expectations about the object we return. As a simple rule, you use extends with the types you demand and super with the types you provide.
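To round the workaround off, here is a complete, compilable version of the static variant sketched above (the constructor and the demo class are additions for illustration; the real Nullable class would expose its value however it sees fit):

class Nullable<A> {
    private final A value;

    Nullable(A value) { this.value = value; }

    // The desired <B super A> bound, expressed by swapping the roles:
    // declare B first and constrain A to extend it.
    static <B, A extends B> B withDefault(Nullable<A> nullable, B defaultValue) {
        return nullable.value == null ? defaultValue : nullable.value;
    }
}

class NullableDemo {
    public static void main(String[] args) {
        Nullable<Integer> intOrNull = new Nullable<Integer>(null);
        Integer i = Nullable.withDefault(intOrNull, 8);
        Number n = Nullable.withDefault(intOrNull, 3.5);
        Object o = Nullable.withDefault(intOrNull, "What's so bad about a String here?");
        System.out.println(i + " " + n + " " + o);
    }
}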
The "official" answer to your question can be found in a Sun/Oracle bug report.
BT2:EVALUATION
See
http://lampwww.epfl.ch/~odersky/ftp/local-ti.ps
particularly section 3 and the last paragraph on page 9. Admitting
type variables on both sides of subtype constraints can result in a
set of type equations with no single best solution; consequently,
type inference cannot be done using any of the existing standard
algorithms. That is why type variables have only "extends" bounds.
Wildcards, on the other hand, do not have to be inferred, so there
is no need for this constraint.
####.### 2004-05-25
Yes; the key point is that wildcards, even when captured, are only used
as inputs of the inference process; nothing with (only) a lower bound needs
to be inferred as a result.
####.### 2004-05-26
I see the problem. But I do not see how it is different from the problems
we have with lower bounds on wildcards during inference, e.g.:
List<? super Number> s;
boolean b;
...
s = b ? s : s;
Currently, we infer List<X> where X extends Object as the type of the
conditional expression, meaning that the assignment is illegal.
####.### 2004-05-26
Sadly, the conversation ends there. The paper to which the (now dead) link used to point is Inferred Type Instantiation for GJ. From glancing at the last page, it boils down to: If lower bounds are admitted, type inference may yield multiple solutions, none of which is principal.
The only reason is that it makes no sense to declare a type parameter with a super bound at the class level.
The only logical type-erasure strategy for Java would have been to fall back to the supertype of all objects, which is the Object class.
A great example and explanation can be found here:
http://www.angelikalanger.com/GenericsFAQ/FAQSections/TypeParameters.html#Why%20is%20there%20no%20lower%20bound%20for%20type%20parameters?
A simple example for rules of type-erasure can be found here:
https://www.tutorialspoint.com/java_generics/java_generics_type_erasure.htm#:~:text=Type%20erasure%20is%20a%20process,there%20is%20no%20runtime%20overhead.
Suppose we have:
basic classes A > B > C and D
class A {
    void methodA() {}
}
class B extends A {
    void methodB() {}
}
class C extends B {
    void methodC() {}
}
class D {
    void methodD() {}
}
job wrapper classes
interface Job<T> {
    void exec(T t);
}
class JobOnA implements Job<A> {
    @Override
    public void exec(A a) {
        a.methodA();
    }
}
class JobOnB implements Job<B> {
    @Override
    public void exec(B b) {
        b.methodB();
    }
}
class JobOnC implements Job<C> {
    @Override
    public void exec(C c) {
        c.methodC();
    }
}
class JobOnD implements Job<D> {
    @Override
    public void exec(D d) {
        d.methodD();
    }
}
and one manager class with 4 different approaches to executing a job on the object
class Manager<T> {
    final T t;
    Manager(T t) {
        this.t = t;
    }
    public void execute1(Job<T> job) {
        job.exec(t);
    }
    public <U> void execute2(Job<U> job) {
        U u = (U) t; // not safe
        job.exec(u);
    }
    public <U extends T> void execute3(Job<U> job) {
        U u = (U) t; // not safe
        job.exec(u);
    }
    // desired feature, does not compile for now
    public <U super T> void execute4(Job<U> job) {
        U u = (U) t; // safe
        job.exec(u);
    }
}
with usage
void usage() {
    B b = new B();
    Manager<B> managerB = new Manager<>(b);

    // TOO STRICT
    managerB.execute1(new JobOnA());
    managerB.execute1(new JobOnB()); // compiled
    managerB.execute1(new JobOnC());
    managerB.execute1(new JobOnD());

    // TOO MUCH FREEDOM
    managerB.execute2(new JobOnA()); // compiled
    managerB.execute2(new JobOnB()); // compiled
    managerB.execute2(new JobOnC()); // compiled !!
    managerB.execute2(new JobOnD()); // compiled !!

    // NOT ADEQUATE RESTRICTIONS
    managerB.execute3(new JobOnA());
    managerB.execute3(new JobOnB()); // compiled
    managerB.execute3(new JobOnC()); // compiled !!
    managerB.execute3(new JobOnD());

    // SHOULD BE
    managerB.execute4(new JobOnA()); // compiled
    managerB.execute4(new JobOnB()); // compiled
    managerB.execute4(new JobOnC());
    managerB.execute4(new JobOnD());
}
Any suggestions on how to implement execute4 now?
========== edited ==========
public void execute4(Job<? super T> job) {
    job.exec(t);
}
Thanks to all :)
========== edited ==========
private <U> void execute2(Job<U> job) {
    U u = (U) t; // now it's safe
    job.exec(u);
}
public void execute4(Job<? super T> job) {
    execute2(job);
}
Much better: inside execute2 the supertype U becomes a named type parameter, so any code there can refer to it.
Interesting discussion :)
I really like the accepted answer, but I would like to put a slightly different perspective on it.
super is supported only on wildcards, to allow contravariance. When it comes to covariance and contravariance, it's important to understand that Java only supports use-site variance, unlike Kotlin or Scala, which allow declaration-site variance. The Kotlin documentation explains it very well here; or if you're more into Scala, here's one for you.
It basically means that in Java, you cannot limit, at declaration time, how your class will be used in terms of PECS. The class can both consume and produce, and some of its methods can do both at the same time, like toArray(T[] a), by the way.
Now, the reason extends is allowed in class and method declarations is that it's more about polymorphism than about variance. And polymorphism is an intrinsic part of Java and OOP in general: if a method can accept some supertype, a subtype can always safely be passed to it. And if a method, at its declaration site, promises as its "contract" to return some supertype, it's totally fine if its implementations return a subtype instead.
