Reflection Casting and Overloaded Method Dispatching in Java

Note that all the code is a simplified example in order to only communicate the core ideas of my question. It should all compile and run though, after slight editing.
I have several classes which all implement a common interface.
public interface Inter{}
public class Inter1 implements Inter{}
public class Inter2 implements Inter{}
In a separate class I have a list of type Inter, which I use to store and remove Inter1 and Inter2 types, based on user input.
java.util.ArrayList<Inter> inters = new java.util.ArrayList<Inter>();
I also have a family of overloaded methods, which deal with how each implementation interacts with each other, along with a default implementation for 2 "Inter"s.
void doSomething(Inter in1, Inter in2){
    System.out.println("Inter/Inter");
}

void doSomething(Inter1 in1, Inter1 in2){
    System.out.println("Inter1/Inter1");
}

void doSomething(Inter2 in1, Inter1 in2){
    System.out.println("Inter2/Inter1");
}
The methods are periodically called like so:
for(int i = 0; i < inters.size() - 1; i++){
    for(int o = i + 1; o < inters.size(); o++){
        Inter in1 = inters.get(i);
        Inter in2 = inters.get(o);
        doSomething(in1.getClass().cast(in1), in2.getClass().cast(in2));
        System.out.println("Class 1: " + in1.getClass().getName());
        System.out.println("Class 2: " + in2.getClass().getName());
    }
}
An example output from this is:
Inter/Inter
Class 1: Inter2
Class 2: Inter2
Inter/Inter
Class 1: Inter2
Class 2: Inter1
Inter/Inter
Class 1: Inter1
Class 2: Inter1
Looking at the output, it is clear that doSomething(Inter in1, Inter in2) is called, even in cases when other methods should be called. Interestingly, the class names outputted are the correct ones.
Why does Java resolve overloaded methods statically when the class types can be determined at runtime using reflection?
Is there any way to get Java to do this? I know I can use reflection and Class.getMethod() and method.invoke() to get the results I want, but it would be so much neater to do so with casting.
I realize that questions about similar concepts have been asked before, but while all of the answers were informative, none satisfied me.
Double dispatch looked like it would work, but that would mean reworking a lot of code, since I use this type of thing often.
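For reference, here is roughly what the reflective version I mentioned looks like (a simplified, self-contained sketch; the lookup falls back to the (Inter, Inter) overload when no exact match exists):

```java
import java.lang.reflect.Method;

interface Inter {}
class Inter1 implements Inter {}
class Inter2 implements Inter {}

public class ReflectiveDispatch {
    static String doSomething(Inter a, Inter b)   { return "Inter/Inter"; }
    static String doSomething(Inter1 a, Inter1 b) { return "Inter1/Inter1"; }
    static String doSomething(Inter2 a, Inter1 b) { return "Inter2/Inter1"; }

    // Look up an overload by the arguments' runtime classes; fall back to
    // the (Inter, Inter) version when there is no exact match.
    static String dispatch(Inter a, Inter b) {
        try {
            Method m;
            try {
                m = ReflectiveDispatch.class.getDeclaredMethod(
                        "doSomething", a.getClass(), b.getClass());
            } catch (NoSuchMethodException e) {
                m = ReflectiveDispatch.class.getDeclaredMethod(
                        "doSomething", Inter.class, Inter.class);
            }
            return (String) m.invoke(null, a, b);
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(dispatch(new Inter1(), new Inter1())); // Inter1/Inter1
        System.out.println(dispatch(new Inter2(), new Inter1())); // Inter2/Inter1
        System.out.println(dispatch(new Inter1(), new Inter2())); // Inter/Inter (no exact overload)
    }
}
```

It works, but compared to a plain cast it trades compile-time checking for a runtime lookup, which is exactly why I would prefer a casting solution.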

It looks to me like we're talking about what's going on with:
doSomething(in1.getClass().cast(in1), in2.getClass().cast(in2));
Based on your surprise that the Inter/Inter overload is always chosen, it seems you're a little confused about what's going on here. In particular, you seem to think that in1.getClass().cast(in1) and in2.getClass().cast(in2) should force a different overload because of their differing runtime types. However, this is wrong.
Method overload resolution happens statically. This means that it happens based on the declared types of the two arguments to the method. Since in1 and in2 are both declared as Inter, the method chosen is void doSomething(Inter in1, Inter in2).
The takeaway here is that in1 is declared as an Inter. This means that in1.getClass() is essentially the same as Inter.class for the purposes of static analysis -- getClass simply returns a Class<? extends Inter>. Therefore, the casts are useless, and you're only ever going to get the first overload.
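To make the static nature concrete, here is a minimal self-contained sketch (nested classes stand in for the question's separate files):

```java
public class StaticResolution {
    interface Inter {}
    static class Inter1 implements Inter {}

    static String doSomething(Inter a, Inter b)   { return "Inter/Inter"; }
    static String doSomething(Inter1 a, Inter1 b) { return "Inter1/Inter1"; }

    public static void main(String[] args) {
        Inter a = new Inter1();
        Inter b = new Inter1();
        // The declared type of a and b is Inter, so the (Inter, Inter)
        // overload is chosen at compile time, whatever the runtime class:
        System.out.println(doSomething(a, b));                   // Inter/Inter
        // Only a compile-time cast changes which overload is chosen:
        System.out.println(doSomething((Inter1) a, (Inter1) b)); // Inter1/Inter1
    }
}
```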

The Java Language Specification (JLS) in section 15.12 Method Invocation Expression explains in detail the process that the compiler follows to choose the right method to invoke.
There, you will notice that this is a compile-time task. The JLS says in subsection 15.12.2:
This step uses the name of the method and the types of the argument expressions to locate methods that are both accessible and applicable
There may be more than one such method, in which case the most specific one is chosen.
In your case, this means that since you are passing two objects declared as type Inter, the most specific method is the one that receives exactly that.
To verify the compile-time nature of this, you can do the following test.
Declare a class like this and compile it.
public class ChooseMethod {
    public void doSomething(Number n){
        System.out.println("Number");
    }
}
Declare a second class that invokes a method of the first one and compile it.
public class MethodChooser {
    public static void main(String[] args) {
        ChooseMethod m = new ChooseMethod();
        m.doSomething(10);
    }
}
If you invoke the main, the output says Number.
Now, add a second more specific method to the ChooseMethod class, and recompile it (but do not recompile the other class).
public void doSomething(Integer i) {
    System.out.println("Integer");
}
If you run the main again, the output is still Number.
Basically, because it was decided at compile time. If you recompile the MethodChooser class (the one with the main), and run the program again, the output will be Integer.
As such, if you want to force the selection of one of the overloaded methods, the type of the arguments must correspond with the type of the parameters at compile time, and not only at run time as you seem to expect in this exercise.
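The same experiment can be collapsed into one file; the cast is what moves the decision (a sketch with the same Number/Integer overloads, not the exact classes above):

```java
public class ChooseMethodDemo {
    static String doSomething(Number n)  { return "Number"; }
    static String doSomething(Integer i) { return "Integer"; }

    public static void main(String[] args) {
        Number n = 10; // declared as Number, even though it holds an Integer
        System.out.println(doSomething(n));           // Number
        System.out.println(doSomething(10));          // Integer (boxed; most specific match)
        System.out.println(doSomething((Integer) n)); // Integer, forced by the cast
    }
}
```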

Related

Why is this Java method call considered ambiguous?

I've come across a strange error message that I believe may be incorrect. Consider the following code:
public class Overloaded {
    public interface Supplier {
        int get();
    }
    public interface Processor {
        String process(String s);
    }

    public static void load(Supplier s) {}
    public static void load(Processor p) {}

    public static int genuinelyAmbiguous() { return 4; }
    public static String genuinelyAmbiguous(String s) { return "string"; }

    public static int notAmbiguous() { return 4; }
    public static String notAmbiguous(int x, int y) { return "string"; }

    public static int strangelyAmbiguous() { return 4; }
    public static String strangelyAmbiguous(int x) { return "string"; }
}
If I have a method that looks like this:
// Exhibit A
public static void exhibitA() {
    // Genuinely ambiguous: either choice is correct
    load(Overloaded::genuinelyAmbiguous); // <-- ERROR
    Supplier s1 = Overloaded::genuinelyAmbiguous;
    Processor p1 = Overloaded::genuinelyAmbiguous;
}
The error we get makes perfect sense; the parameter to load() can be assigned to either, so we get an error that states the method call is ambiguous.
Conversely, if I have a method that looks like this:
// Exhibit B
public static void exhibitB() {
    // Correctly infers the right overloaded method
    load(Overloaded::notAmbiguous);
    Supplier s2 = Overloaded::notAmbiguous;
    Processor p2 = Overloaded::notAmbiguous; // <-- ERROR
}
The call to load() is fine, and as expected, I cannot assign the method reference to both Supplier and Processor because it is not ambiguous: Overloaded::notAmbiguous cannot be assigned to p2.
And now the weird one. If I have a method like this:
// Exhibit C
public static void exhibitC() {
    // Complains that the reference is ambiguous
    load(Overloaded::strangelyAmbiguous); // <-- ERROR
    Supplier s3 = Overloaded::strangelyAmbiguous;
    Processor p3 = Overloaded::strangelyAmbiguous; // <-- ERROR
}
The compiler complains that the call to load() is ambiguous (error: reference to load is ambiguous), but unlike Exhibit A, I cannot assign the method reference to both Supplier and Processor. If it were truly ambiguous, I feel I should be able to assign s3 and p3 to both overloaded parameter types just as in Exhibit A, but I get an error on p3 stating that error: incompatible types: invalid method reference. This second error in Exhibit C makes sense, Overloaded::strangelyAmbiguous isn't assignable to Processor, but if it isn't assignable, why is it still considered ambiguous?
It would seem that the method reference inference only looks at the arity of the FunctionalInterface when determining which overloaded version to select. In the variable assignment, arity and type of parameters are checked, which causes this discrepancy between the overloaded method and the variable assignment.
This seems to me like a bug. If it isn't, at least the error message is incorrect, since there is arguably no ambiguity when only one of the two choices is correct.
Your question is very similar to this one.
The short answer is:
Overloaded::genuinelyAmbiguous;
Overloaded::notAmbiguous;
Overloaded::strangelyAmbiguous;
all these method references are inexact (they have multiple overloads). Consequently, according to the JLS §15.12.2.2, they are skipped from the applicability check during overload resolution, which results in ambiguity.
In this case, you need to specify the type explicitly, for example:
load((Processor) Overloaded::genuinelyAmbiguous);
load((Supplier) Overloaded::strangelyAmbiguous);
Method references and overloading: just... don't. In theory you are more than correct - this should be fairly easy for a compiler to deduce - but let's not confuse humans with compilers.
The compiler sees a call to load and says: "hey, I need to call that method. Can I? Well, there are two of them. Fine, let's match the argument." But the argument is a method reference to an overloaded method, so the compiler gets really confused. It essentially says: "if I could tell which method you are referencing, I could choose the right load; but if I could tell which load you want to call, I could infer the correct strangelyAmbiguous" - and so it just goes in circles, chasing its tail. That imagined dialogue in a compiler's "mind" is the simplest way I can think of to explain it. It also suggests a golden rule: mixing method overloading and method references is a bad idea.
But, you might say - ARITY! The number of arguments is probably the very first thing a compiler checks when deciding whether a call matches an overload, which is exactly your point about:
Processor p = Overloaded::strangelyAmbiguous;
And for this simple case the compiler could indeed infer the correct method - we humans can, so it should be a no-brainer for a compiler. The problem is that this is a simple case with just two methods; what about 100*100 possible pairings? The designers had to either allow resolution up to some limit (say 5*5) or ban it entirely - and I guess you know which path they took. It should also be obvious why this would work with a lambda instead: the arity is right there, explicit.
About the error message: this would not be anything new. If you play enough with lambdas and method references, you will start to hate the message "a non-static method cannot be referenced from a static context" when the real problem has literally nothing to do with that. IIRC these messages have improved from Java 8 onward, so this one may well improve in a later release too.

Java Compile time non polymorphism

In the code below I get a compiler error at b.printName();. As I understand it, the error is due to the fact that the compiler is effectively operating in a non-polymorphic way (i.e. it essentially only looks at the left side of the declaration, and therefore b is a Question). Since b is of type Question, and since Question does not have a no-args printName method, you get a compilation error. Is that correct?
Now, assuming that is correct, my question is: why? Surely the compiler should know that b is referring to an object that does in fact support the no-args printName method? If you look at how the compiler behaves with casting, there are examples where it, for lack of a better word, acts polymorphically - that is, it knows what's going on on the right-hand side of the statement and acts on that knowledge. For example, if an interface type refers to an object that implements the interface, the compiler looks at the right-hand side (the object that implements the interface) and decides no cast is required. So why doesn't the compiler act that way here? Why doesn't it see that the object in question is actually a Blue, and that a Blue does indeed support the no-arg method printName?
public class Polymorf3 {
    public static void main(String[] args){
        Polymorf3 me = new Polymorf3();
        me.doStuff();
    }

    public void doStuff() {
        Bat a = new Bat();
        Question b = new Blue();
        //a.printName();
        a.printName(a.name);
        b.printName(); // Compiler error: required String, found no args
    }

    abstract class Question {
        String name = "Question_name";
        public void printName(String name){ System.out.println(name); }
    }

    class Bat extends Question {
        String name = "Bat_Bruce";
        //public void printName(){ System.out.println(name); }
    }

    class Blue extends Question {
        String name = "Clark";
        public void printName() { System.out.println(name); }
    }
}
Though b is instantiated as a Blue, you declared it as Question b = new Blue();, so the compiler treats it as a Question, and thus that's the only interface available to it without an explicit cast:
((Blue) b).printName();
Alternatively, you can declare it as Blue b = new Blue(); and b.printName(); will not throw a compile time error.
Essentially what's happening here is that you're declaring your new variable b at a higher level of abstraction, so the only printName method available to b is the one in the higher level of abstraction, the one with the args.
Edit:
OP asked why the compiler treats b as a Question even though it's initialized as Blue. Consider the following:
Question q = new Blue();
// ... some other code...
q = new Bat(); // Valid!!
q.printName("some string");
Now consider that tomorrow, some other developer comes in and changes it to the following:
Blue q = new Blue();
// ... some other code...
q = new Bat(); // Invalid!! Compiler error
q.printName("some string");
Declaring a variable at the highest level of abstraction required for your operation means you can later change the implementation more easily, without affecting the rest of your code. That should make it clear why the Java compiler treats b as a Question: b can, at any time, be reassigned to an instance of Blue or Bat, so treating it as the implementation type (Blue or Bat) would break the contract of the Question type by allowing calls to a no-arg printName method that not every Question has.
You seem to have misunderstood what polymorphism means. It means that you can treat an instance of the derived class as if it was an instance of the base class. That includes not calling methods on it that the base class doesn't provide. The variable type informs what methods you can call, and the instantiation type determines what implementations of those methods are run.
By putting your Blue instance in a Question variable, you are asking to treat it like a Question. If you wanted to call methods on your Question variable that are not provided by the Question class, then why have it be a Question variable at all? If you could call derived-class methods on a base class variable, it would not be a base class variable.
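A short sketch of that point - the variable's declared type decides what you may call, and a cast (guarded by instanceof) is the explicit way to ask for the subtype's API (simplified, hypothetical versions of the Question/Blue classes that return strings instead of printing):

```java
public class DeclaredTypeDemo {
    static abstract class Question {
        String printName(String name) { return name; }
    }
    static class Blue extends Question {
        String printName() { return "Clark"; }
    }

    public static void main(String[] args) {
        Question b = new Blue();
        // b.printName();  // does not compile: Question has no no-arg printName
        System.out.println(b.printName("via Question")); // the inherited one-arg method
        if (b instanceof Blue) {
            // The cast tells the compiler to treat b as a Blue:
            System.out.println(((Blue) b).printName());  // Clark
        }
    }
}
```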

Overloading is compile-time polymorphism. Really?

I do know the syntactical difference between overriding and overloading. And I also know that overriding is run-time polymorphism and overloading is compile-time polymorphism. But my question is: is overloading really compile-time polymorphism? Is the method call really resolved at compile time? To clarify my point, let's consider an example class.
public class Greeter {
    public void greetMe() {
        System.out.println("Hello");
    }
    public void greetMe(String name) {
        System.out.println("Hello " + name);
    }
    public void wishLuck() {
        System.out.println("Good Luck");
    }
}
Since all of the methods greetMe(), greetMe(String name), and wishLuck() are public, they can all be overridden (including the overloaded one), right? For example,
public class FancyGreeter extends Greeter {
    public void greetMe() {
        System.out.println("***********");
        System.out.println("*  Hello  *");
        System.out.println("***********");
    }
}
Now, consider the following snippet:
Greeter greeter = GreeterFactory.getRandomGreeter();
greeter.greetMe();
The getRandomGreeter() method returns a random Greeter object. It may return an object of Greeter or of any of its subclasses, like FancyGreeter or GraphicalGreeter or any other. getRandomGreeter() might create the objects using new, or dynamically load a class file and instantiate it via reflection, or in any other possible way. Any of these methods of Greeter may or may not be overridden in subclasses. So the compiler has no way to know whether a particular method (overloaded or not) is overridden. Right? Also, Wikipedia says on virtual functions:
In Java, all non-static methods are by default "virtual functions".
Only methods marked with the keyword final, which cannot be overridden,
along with private methods, which are not inherited, are non-virtual.
Since virtual functions are resolved at run time using dynamic method dispatch, and since all non-private, non-final methods are virtual (whether overloaded or not), they must be resolved at run time. Right?
Then, How can overloading still be resolved at compile-time? Or, is there anything that I misunderstood, or am I missing?
Every 'Greeter' class has 3 virtual methods: void greetMe(), void greetMe(String), and void wishLuck().
When you call greeter.greetMe() the compiler can work out which one of the three virtual methods should be called from the method signature - ie. the void greetMe() one since it accepts no arguments. Which specific implementation of the void greetMe() method is called depends on the type of the greeter instance, and is resolved at run-time.
In your example it's trivial for the compiler to work out which method to call, since the method signatures are all completely different. A slightly better example for showing the 'compile time polymorphism' concept might be as follows:
class Greeter {
    public void greetMe(Object obj) {
        System.out.println("Hello Object!");
    }
    public void greetMe(String str) {
        System.out.println("Hello String!");
    }
}
Using this greeter class will give the following results:
Object obj = new Object();
String str = "blah";
Object strAsObj = str;
greeter.greetMe(obj); // prints "Hello Object!"
greeter.greetMe(str); // prints "Hello String!"
greeter.greetMe(strAsObj); // prints "Hello Object!"
The compiler will pick out the method with the most specific match using the compile-time type, which is why the 2nd example works and calls the void greetMe(String) method.
The last call is the most interesting one: Even though the run-time type of strAsObj is String, it has been cast as an Object so that's how the compiler sees it. So, the closest match the compiler can find for that call is the void greetMe(Object) method.
Overloaded methods can still be overridden, if that is what you ask.
Overloaded methods are like different families, even though they share the same name. The compiler statically chooses one family given the signature, and then at run time it is dispatched to the most specific method in the class hierarchy.
That is, method dispatching is performed in two steps:
The first one is done at compile time with the static information available, the compiler will emit a call for the signature that matches best your current method parameters among the list of overloaded methods in the declared type of the object the method is invoked upon.
The second step is performed at run time, given the method signature that should be called (previous step, remember?), the JVM will dispatch it to the most concrete overridden version in the actual type of receiver object.
If the method argument types are not covariant at all, overloading is equivalent to having the method names mangled at compile time; because they are effectively different methods, the JVM will never dispatch them interchangeably based on the runtime type of the receiver.
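The two steps can be seen in a small sketch (hypothetical Base/Derived classes, not from the question):

```java
public class TwoStepDispatch {
    static class Base {
        String greet(Object o) { return "Base/Object"; }
        String greet(String s) { return "Base/String"; }
    }
    static class Derived extends Base {
        @Override
        String greet(Object o) { return "Derived/Object"; } // overrides only one overload
    }

    public static void main(String[] args) {
        Base b = new Derived();
        Object arg = "hello"; // declared type Object, runtime type String
        // Step 1 (compile time): arg's declared type selects the greet(Object) signature.
        // Step 2 (run time): b's actual class selects Derived's override of that signature.
        System.out.println(b.greet(arg));     // Derived/Object
        System.out.println(b.greet("hello")); // Base/String (Derived does not override it)
    }
}
```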
What is polymorphism?
As I understand it: if an entity can be represented in more than one form, that entity is said to exhibit polymorphism.
Now, let's apply this definition to Java constructs:
1) Operator overloading is compile-time polymorphism.
For example, the + operator can be used to add two numbers OR to concatenate two strings. It's an example of polymorphism - strictly speaking, compile-time polymorphism.
2) Method overloading is compile-time polymorphism.
For example, a method with the same name can have more than one implementation. It's also compile-time polymorphism.
It's compile-time because the compiler decides, before execution of the program, which form will be used at run time.
3) Method overriding is run-time polymorphism.
For example, a method with the same signature can have more than one implementation. It's run-time polymorphism.
4) Using a base class in place of a derived class is run-time polymorphism.
For example, an interface reference can point to any of its implementors.
It's run-time because the flow of the program can't be known before execution; only at run time can it be decided which form will be used.
I hope that clears it up a bit.
Overloading in this respect means that the type of the function is statically determined at compile time as opposed to dynamic dispatch.
What really happens behind the scenes is that for a method named "foo" with overloads taking types "A" and "B", two methods are effectively created ("foo_A" and "foo_B"). Which of them is called is determined at compile time (foo((A) object) or foo((B) object) results in foo_A or foo_B being called). So in a way this is compile-time polymorphism, although the real method (i.e. which implementation in the class hierarchy to run) is determined at runtime.
I have a strong objection to calling method overloading compile-time polymorphism.
I agree that method overloading is static binding (compile time), but I don't see polymorphism in that.
I tried to put my opinion in my question to get clarification; you can refer to this link.

Final arguments in interface methods - what's the point?

In Java, it is perfectly legal to define final arguments in interface methods and do not obey that in the implementing class, e.g.:
public interface Foo {
    public void foo(int bar, final int baz);
}

public class FooImpl implements Foo {
    @Override
    public void foo(final int bar, int baz) {
        ...
    }
}
In the above example, bar and baz have opposite final definitions in the class vs. the interface.
In the same fashion, no final restrictions are enforced when one class method extends another, either abstract or not.
While final has some practical value inside the class method body, is there any point specifying final for interface method parameters?
It doesn't seem like there's any point to it. According to the Java Language Specification 4.12.4:
Declaring a variable final can serve as useful documentation that its value will not change and can help avoid programming errors.
However, a final modifier on a method parameter is not mentioned in the rules for matching signatures of overridden methods, and it has no effect on the caller, only within the body of an implementation. Also, as noted by Robin in a comment, the final modifier on a method parameter has no effect on the generated byte code. (This is not true for other uses of final.)
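A small sketch of both points - the override is legal despite the differing final modifiers, and final only constrains the implementation body (hypothetical Foo/FooImpl names, returning a string instead of doing real work):

```java
public class FinalParamDemo {
    interface Foo {
        String foo(final int bar); // 'final' here obligates implementors to nothing
    }

    static class FooImpl implements Foo {
        @Override
        public String foo(int bar) { // legal override: final is not part of the signature
            bar = bar + 1;           // allowed, since this parameter is not final
            return "bar=" + bar;
        }
    }

    public static void main(String[] args) {
        System.out.println(new FooImpl().foo(41)); // bar=42
    }
}
```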
Some IDEs will copy the signature of the abstract/interface method when inserting an implementing method in a sub class.
I don't believe it makes any difference to the compiler.
EDIT: While I believe this was true in the past, I don't think current IDEs do this any more.
The final modifier on a method parameter is always relevant only to the method implementation, never to the caller. Therefore, there is no real reason to use it in interface method signatures - unless you want to follow the same consistent coding standard, which requires final method parameters, in all method signatures. Then it is nice to be able to do so.
Update: Original answer below was written without fully understanding the question, and therefore does not directly address the question :) Nevertheless, it must be informative for those looking to understand the general use of final keyword.
As for the question, I would like to quote my own comment from below.
I believe you're not forced to implement the finality of an argument to leave you free to decide whether it should be final or not in your own implementation.
But yes, it sounds rather odd that you can declare it final in the interface, but have it non-final in the implementation. It would have made more sense if either:
a. final keyword was not allowed for interface (abstract) method arguments (but you can use it in implementation), or
b. declaring an argument as final in interface would force it to be declared final in implementation (but not forced for non-finals).
I can think of two reasons why a method signature can have final parameters: Beans and Objects (Actually, they are both the same reason, but slightly different contexts.)
Objects:
public static void main(String[] args) {
    StringBuilder cookingPot = new StringBuilder("Water ");
    addVegetables(cookingPot);
    addChicken(cookingPot);
    System.out.println(cookingPot.toString());
    // ^--- OUTPUT IS: Water Carrot Broccoli Chicken ChickenBroth
    // We forgot to add cauliflower. It went into the wrong pot.
}

private static void addVegetables(StringBuilder cookingPot) {
    cookingPot.append("Carrot ");
    cookingPot.append("Broccoli ");
    cookingPot = new StringBuilder(cookingPot.toString());
    // ^--- Assignment allowed...
    cookingPot.append("Cauliflower ");
}

private static void addChicken(final StringBuilder cookingPot) {
    cookingPot.append("Chicken ");
    //cookingPot = new StringBuilder(cookingPot.toString());
    // ^--- COMPILATION ERROR! It is final.
    cookingPot.append("ChickenBroth ");
}
The final keyword ensured that we will not accidentally create a new local cooking pot by showing a compilation error when we attempted to do so. This ensured the chicken broth is added to our original cooking pot which the addChicken method got. Compare this to addVegetables where we lost the cauliflower because it added that to a new local cooking pot instead of the original pot it got.
Beans:
It is the same concept as objects (shown above). Beans are essentially objects in Java. However, beans (JavaBeans) are used in various applications as a convenient way to store and pass around a defined collection of related data. Just as addVegetables could mess up the cooking process by creating a new cooking pot StringBuilder and throwing it away with the cauliflower, it could do the same with a cooking pot JavaBean.
I believe it may be a superfluous detail, as whether it's final or not is an implementation detail.
(Sort of like declaring methods/members in an interface as public.)

Method/Constructor Overloading with Super/Sub types

I have some questions as to which overloaded method would be called in certain cases.
Case 1:
public void someMethod(Object obj){
    System.out.println("Object");
}

public void someMethod(InputStream is){
    System.out.println("InputStream");
}

public void someMethod(FilterInputStream fis){
    System.out.println("FilterInputStream");
}
I know that if I pass it a String it will print "Object". However, what if I pass it an InputStream? It gets more confusing if I pass it something such as BufferedInputStream. Will this call the Object one, the InputStream one, or the FilterInputStream one? Does the order that the methods appear matter?
Case 2:
This is a little more tricky, because it takes advantage of multiple interface inheritance. Neither BlockingQueue nor Deque is a sub- or supertype of the other, but both are supertypes of BlockingDeque. Interfaces allow this kind of multiple inheritance because they don't need a tree structure. The declaration of BlockingDeque is:
public interface BlockingDeque<E> extends BlockingQueue<E>, Deque<E>
public void someMethod(BlockingQueue bq){
    System.out.println("BlockingQueue");
}

public void someMethod(Deque bq){
    System.out.println("Deque");
}

public void someCaller(){
    BlockingDeque bd = new LinkedBlockingDeque();
    someMethod(bd);
}
Will this Call someMethod(BlockingQueue) or someMethod(Deque)?
Case 3:
You can combine these two with this:
public void someMethod(Queue q){
    //...
}

public void someMethod(Deque q){
    //...
}

public void someMethod(List p){
    //...
}

public void someCaller(){
    someMethod(new LinkedList());
}
Same question: someMethod(Queue), someMethod(Deque), or someMethod(List)?
Case 4:
You can make things very complicated too, by introducting two arguments:
public void someMethod(Collection c1, List c2){
    //...
}

public void someMethod(List c1, Collection c2){
    //...
}

public void someCaller(){
    someMethod(new ArrayList(), new ArrayList());
}
Will this call someMethod(Collection, List) or vice versa?
Case 5:
It gets worse when they have different return types:
public Class<?> someMethod(BlockingQueue bq){
    return BlockingQueue.class;
}

public String someMethod(Deque bq){
    return "Deque";
}

public void someCaller(){
    BlockingDeque bd = new LinkedBlockingDeque();
    System.out.println(someMethod(bd));
}
These can get pretty bad. What will someCaller print in this case? someMethod(BlockingQueue).toString(), or someMethod(Deque)?
In general, Java will invoke the narrowest non-ambiguous definition, so for the first few cases, if you pass a narrow type it will invoke the narrowest applicable method, and if you pass a wider type (say InputStream) you get the wider type's method (in Case 1, for InputStream, that's method 2). A simple test confirms this; note also that upcasting widens the static type and so calls the wider type's method.
The core issue is whether Java can resolve a unique function for calling. So that means if you provide a definition that has multiple matches, you need to either match the highest known type, or uniquely match a wider type without also matching the higher type. Basically: if you match multiple functions, one of them needs to be higher in hierarchy for Java to resolve the difference, otherwise the calling convention is definitively ambiguous.
Java seems to throw a compilation error when the method signatures are ambiguous. In my view Case 4 is canonically the worst example of this, so I wrote a quick test and did in fact get the expected compilation error, complaining of an ambiguous match for functions to invoke.
Case 5 doesn't make anything better or worse: Java doesn't use return type to disambiguate which method to call, so it won't help you -- and since the definitions are already ambiguous you're still going to end up with a compilation error.
So the quick summary:
Case 1: called with a plain InputStream, uses the 2nd def; with a FilterInputStream (or a subclass such as BufferedInputStream), the 3rd def; with any other InputStream subtype, the 2nd def; anything else, the 1st def
Case 2: ambiguous, will cause a compilation error (neither BlockingQueue nor Deque is more specific)
Case 3: ambiguous, will cause a compilation error (LinkedList implements both List and Deque)
Case 4: ambiguous, will cause a compilation error
Case 5: ambiguous, will cause a compilation error (return types don't disambiguate)
Finally, if you have doubts that you're calling the definition you think you should be, you should consider changing your code to remove the ambiguity or work to specify the right type argument(s) to call the "right" function. Java will tell you when it can't make a smart decision (when things are truly ambiguous), but the best way to avoid any of these problems is through consistent and unambiguous implementations. Don't do weird stuff, like case 4, and you won't run into weird problems.
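A quick test of Case 1 along those lines (a sketch using which(...) methods that return their answer instead of printing it):

```java
import java.io.BufferedInputStream;
import java.io.ByteArrayInputStream;
import java.io.FilterInputStream;
import java.io.InputStream;

public class OverloadCase1 {
    static String which(Object o)            { return "Object"; }
    static String which(InputStream is)      { return "InputStream"; }
    static String which(FilterInputStream f) { return "FilterInputStream"; }

    public static void main(String[] args) {
        InputStream plain = new ByteArrayInputStream(new byte[0]);
        BufferedInputStream buffered = new BufferedInputStream(plain); // extends FilterInputStream

        System.out.println(which("a string"));             // Object
        System.out.println(which(plain));                  // InputStream
        System.out.println(which(buffered));               // FilterInputStream (most specific)
        System.out.println(which((InputStream) buffered)); // InputStream: the upcast widens the static type
    }
}
```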
In the case of overloaded methods, the method called will be the one with the most restrictive parameter type that is still compatible with the argument being passed. Also note that the binding of an overloaded method is decided at compile time from the declared types, not from the runtime types of the objects. E.g.:
Case 1: if the argument's compile-time type is InputStream, the 2nd method is called. A BufferedInputStream argument goes to the 3rd method, since BufferedInputStream extends FilterInputStream.
Case 2: this fails at compile time, because for a BlockingDeque reference the call is ambiguous: the argument fits either of the two methods, as its type extends both.
Case 3: also ambiguous. LinkedList implements both List and Deque, and neither method is more specific than the other, so this is a compile-time error.
Case 4: ambiguous, because with these arguments either of the two methods applies and there is no way to discern between them.
Case 5: return types play no role in overloaded method selection; Case 2 holds.
This is a bit tangential to the question about the overloaded arguments, but there is a pretty crisp reason why Case 5 is "worse".
Case 5 uses the language feature called covariant return types, which wasn't originally present in Java but was added in 1.5. Return types, however, play no part in overload resolution: if the compiler cannot resolve a unique method from the arguments alone, the call fails, and that is what happens in this case.
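A minimal sketch of covariant return types (hypothetical Animal/Cat classes, unrelated to the question's code):

```java
public class CovariantReturn {
    static class Animal {
        Animal reproduce() { return new Animal(); }
    }
    static class Cat extends Animal {
        @Override
        Cat reproduce() { return new Cat(); } // narrower return type: legal since Java 5
    }

    public static void main(String[] args) {
        Cat kitten = new Cat().reproduce(); // no cast needed thanks to covariance
        Animal pet = new Cat();
        System.out.println(pet.reproduce().getClass().getSimpleName()); // Cat (dynamic dispatch)
    }
}
```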
