In the code below I get a compiler error at b.printName();. As I understand it, the error arises because the compiler is effectively operating in a non-polymorphic way here (i.e. it only looks at the left-hand side of the assignment, so b is a Question). Since b is of type Question, and since Question does not have a no-args printName method, you get a compilation error. Is that correct?
Now assuming that is correct, my question is: why? Surely the compiler should know that Question b refers to an object that does in fact support the no-args printName method? E.g. if you look at how the compiler behaves with casting, there are examples where the compiler, for lack of a better word, acts polymorphically, or to put it another way, knows what's going on on the right-hand side of the assignment and acts on that knowledge. An example would be when an interface type refers to an object that implements the interface: the compiler looks at the right-hand side of the statement (i.e. the object that implements the interface) and decides no cast is required. So why doesn't the compiler act that way here? Why doesn't it see that the object in question is actually a Blue, and that a Blue does indeed support the no-args printName method?
public class Polymorf3 {
    public static void main(String[] args) {
        Polymorf3 me = new Polymorf3();
        me.doStuff();
    }

    public void doStuff() {
        Bat a = new Bat();
        Question b = new Blue();
        //a.printName();
        a.printName(a.name);
        b.printName(); // Compiler error: required String, found no args
    }

    abstract class Question {
        String name = "Question_name";
        public void printName(String name) { System.out.println(name); }
    }

    class Bat extends Question {
        String name = "Bat_Bruce";
        //public void printName(){ System.out.println(name);}
    }

    class Blue extends Question {
        String name = "Clark";
        public void printName() { System.out.println(name); }
    }
}
Though b refers to a Blue object, since you declared it as Question b = new Blue();, the compiler treats it as type Question, and thus Question's interface is the only one available to it without an explicit cast:
((Blue)b).printName();
Alternatively, you can declare it as Blue b = new Blue(); and b.printName(); will not throw a compile time error.
Essentially what's happening here is that you're declaring your new variable b at a higher level of abstraction, so the only printName method available to b is the one at that higher level of abstraction, the one that takes a String argument.
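For illustration, a minimal sketch of the guarded downcast (this would go inside doStuff() above; the instanceof check is an addition, not part of the original code):
Question b = new Blue();
if (b instanceof Blue) {
    ((Blue) b).printName(); // cast makes Blue's no-arg method visible: prints "Clark"
}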
Edit:
OP asked why the compiler treats b as a Question even though it's initialized as Blue. Consider the following:
Question q = new Blue();
// ... some other code...
q = new Bat(); // Valid!!
q.printName("some string");
Now consider that tomorrow, some other developer comes in and changes it to the following:
Blue q = new Blue();
// ... some other code...
q = new Bat(); // Invalid!! Compiler error
q.printName("some string");
Declaring a variable at the highest level of abstraction required for your operation means you can later change the implementation more easily and without affecting the rest of your code. That is why the Java compiler treats b as a Question: b can, at any time, be reassigned to an instance of Blue or Bat, so treating it as the implementation type (Blue or Bat) would break as soon as that happens. The compiler holds you to the contract of the Question type, which declares no no-arg printName method.
You seem to have misunderstood what polymorphism means. It means that you can treat an instance of the derived class as if it were an instance of the base class. That includes not calling methods on it that the base class doesn't provide. The variable type determines what methods you can call, and the instantiation type determines which implementations of those methods are run.
By putting your Blue instance in a Question variable, you are asking to treat it like a Question. If you wanted to call methods on your Question variable that are not provided by the Question class, then why have it be a Question variable at all? If you could call derived-class methods on a base class variable, it would not be a base class variable.
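A quick sketch of that rule, using the Question and Blue classes from the question:
Question b = new Blue();
b.printName("hello"); // compiles: Question declares printName(String), so this prints "hello"
// b.printName();     // does not compile: Question declares no no-arg printName
// If Blue also overrode printName(String), that override is what would run at the call above.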
Related
In Java, can we pass a superclass object to a subclass reference?
I know it is a weird question and practically not viable, but I want to understand the logic behind why it is not allowed in Java.
class Employee {
    public void met1() {
        System.out.println("met1");
    }
}

class SalesPerson extends Employee {
    @Override
    public void met1() {
        System.out.println("new met1");
    }

    public void met2() {
        System.out.println("met2");
    }
}

public class ReferenceTest {
    public static void main(String[] args) {
        SalesPerson sales = new Employee(); // line 1
        sales.met1(); // line 2
        sales.met2(); // line 3
    }
}
What would have happened if Java allowed line 1 to compile? Where would the problem arise?
Any inputs/links are welcome.
If your SalesPerson sales = new Employee(); statement were allowed to compile, it would break the principles of polymorphism, which is one of the core features of the language.
Also, you should get familiar with what compile-time type and runtime type mean:
The compile-time type of a variable is the type it is declared as, while the runtime type is the type of the actual object the variable points to. For example:
Employee sales = new SalesPerson();
The compile-time type of sales is Employee, and the runtime type will be SalesPerson.
The compile-time type defines which methods can be called, while the runtime type defines what happens during the actual call.
Let's suppose for a moment that this statement was valid:
SalesPerson sales = new Employee();
As I said, the compile-time type defines which methods can be called, so met2() would have been eligible for calling. Meanwhile, the Employee class doesn't have a met2() and so the actual call would have been impossible.
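If you actually need met2() through a supertype reference, the conventional route is a guarded downcast; a minimal sketch using the classes above:
Employee e = new SalesPerson(); // compile-time type Employee, runtime type SalesPerson
e.met1();                       // prints "new met1": the runtime type selects the override
if (e instanceof SalesPerson) {
    ((SalesPerson) e).met2();   // the cast gives met2() a compile-time home: prints "met2"
}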
No. It makes zero sense to allow that.
The reason is because subclasses generally define additional behavior. If you could assign a superclass object to a subclass reference, you would run into problems at runtime when you try to access class members that don't actually exist.
For example, if this were allowed:
String s = new Object();
You would run into some pretty bad problems. What happens if you try to call a String method? Would the runtime crash? Or perhaps a no-op would be performed? Should this even compile?
If the runtime were to crash, you could use runtime checks to make sure the objects you receive actually contain the methods you want. But then you're basically implementing guarantees that the Java type system already provides at compile time. So really that "feature" would cost you nothing but a bunch of type-checking code that you shouldn't have had to write in the first place.
If no-ops were executed instead of nonexistent methods, it would be extremely difficult to ensure that your programs ran as written, since any reference could really be an Object at any point. This might be easy to handle when you are working on your own and control all your code, but those guarantees essentially vanish once you have to deal with other people's code.
If you want the compiler to do the checking, assuming compiler writers don't hunt you down and give you a stern talking-to -- well, you're back to "normal" behavior once more. So again, it's just a lot of work for zero benefit.
Long story short: No, it's not allowed, because it makes zero sense to do so, and if a language designer tried to allow that they would be locked up before they could do any more harm.
If you inherit from a class, you always specialize the common behavior of the super class.
In your example, the SalesPerson is a special Employee. It inherits all behavior from the super class and can override behavior to make it different or add new behavior.
Since it is allowed, you can initialize a variable of the supertype with an instance of the subtype, like Employee e = new SalesPerson(), and then use all the common behavior through that variable.
If it were instead possible to do it the other way round, the actual object might lack several members that the subclass reference promises to provide.
You find this very often when using the Java Collections API, where, for example, you can use the common List interface for operations like iterating, but when initializing it you use a concrete subclass such as ArrayList.
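A typical instance of that pattern from the Collections API:
import java.util.ArrayList;
import java.util.List;

List<String> names = new ArrayList<>(); // declared as the interface, initialized as a subclass
names.add("Alice");
for (String n : names) {                // iterating needs only the List contract
    System.out.println(n);
}
// Swapping in, say, new LinkedList<>() later would leave the rest of this code untouched.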
I know this question has been asked a lot, but the usual answers are far from satisfying in my view.
given the following class hierarchy:
class SuperClass{}
class SubClass extends SuperClass{}
why do people use this pattern to instantiate SubClass:
SuperClass instance = new SubClass();
instead of this one:
SubClass instance = new SubClass();
Now, the usual answer I see is that this is in order to send instance as an argument to a method that requires an instance of SuperClass like here:
void aFunction(SuperClass param){}
//somewhere else in the code...
...
aFunction(instance);
...
But I can send an instance of SubClass to aFunction regardless of the type of the variable holding it! That means the following code will compile and run with no errors (assuming the definition of aFunction above):
SubClass instance = new SubClass();
aFunction(instance);
In fact, AFAIK variable types are meaningless at runtime. They are used only by the compiler!
Another possible reason to define a variable as SuperClass would be if it had several different subclasses and the variable is supposed to switch its reference between them at runtime, but I, for example, have only seen this happen in class (not super, not sub, just class). Definitely not sufficient to require a general pattern...
The main argument for this style of coding is the Liskov Substitution Principle, which states that if X is a subtype of T, then any instance of T should be able to be swapped out with an instance of X.
The advantage of this is simple. Let's say we've got a program with a properties file that looks like this:
mode="Run"
And your program looks like this:
public class Program {
    public static Mode mode;

    public static void main(String[] args) {
        mode = Config.getMode();
        mode.run();
    }
}
So, briefly, this program uses the config file to define the mode it boots up in. In the Config class, getMode() might look like this:
public Mode getMode() {
    String type = getProperty("mode"); // now equals "Run" in our example
    switch (type) {
        case "Run": return new RunMode();
        case "Halt": return new HaltMode();
        default: throw new IllegalArgumentException("Unknown mode: " + type); // added so the method always returns or throws
    }
}
Why this wouldn't work otherwise
Now, because you have a reference of type Mode, you can completely change the functionality of your program by simply changing the value of the mode property. If you had public RunMode mode, you would not be able to use this kind of functionality.
Why this is a good thing
This pattern has caught on so well because it opens programs up for extensibility. It means that this type of desirable functionality is possible with the smallest amount of changes, should the author desire to implement this kind of functionality. And I mean, come on. You change one word in a config file and completely alter the program flow, without editing a single line of code. That is desirable.
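For completeness, a sketch of the Mode hierarchy the example assumes (RunMode and HaltMode are named in the snippet above; their bodies here are hypothetical):
interface Mode {
    void run();
}

class RunMode implements Mode {
    @Override
    public void run() { System.out.println("Booting in run mode"); }
}

class HaltMode implements Mode {
    @Override
    public void run() { System.out.println("Booting in halt mode"); }
}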
In many cases it doesn't really matter but is considered good style.
You limit the information provided to users of the reference to what is necessary, i.e. that it is an instance of type SuperClass. It doesn't (and shouldn't) matter whether the variable references an object of type SuperClass or SubClass.
Update:
This is also true for local variables that are never passed as a parameter, etc.
As I said, it often doesn't matter but is considered good style because you might later change the variable to hold a parameter or another subtype of the supertype. In that case, if you used the subtype first, your further code (in that single scope, e.g. a method) might accidentally rely on the API of one specific subtype, and changing the variable to hold another type might break your code.
I'll expand on Chris' example:
Consider you have the following:
RunMode mode = new RunMode();
...
You might now rely on the fact that mode is a RunMode.
However, later you might want to change that line to:
RunMode mode = Config.getMode(); //breaks
Oops, that doesn't compile. Ok, let's change that.
Mode mode = Config.getMode();
That line would compile now, but your further code might break, because you accidentally relied on mode being an instance of RunMode. Note that it might compile but could break at runtime or mess up your logic.
SuperClass instance = new SubClass1();
After some lines, you may do instance = new SubClass2();
But if you write SubClass1 instance = new SubClass1();
after some lines, you can't do instance = new SubClass2();
This is called polymorphism: a superclass reference to a subclass object.
In fact, AFAIK variable types are meaningless at runtime. They are used only by the compiler!
Not sure where you read this. At compile time, the compiler only knows the class of the reference type (the superclass, in the polymorphic case you describe). At runtime, Java knows the actual type of the object (.getClass()). At compile time, the compiler only checks that the invoked method's definition exists in the class of the reference type. Which implementation to invoke (method overriding) is determined at runtime based on the actual type of the object.
Why polymorphism?
Well, Google will find you more, but here is an example. You have a common method draw(Shape s). Now a Shape can be a Rectangle, a Circle, or any custom shape. If you don't use a Shape reference in the draw() method, you will have to create a different method for each subclass of Shape.
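A sketch of that idea (this Shape hierarchy is hypothetical, following the answer's description):
interface Shape {
    void render();
}

class Rectangle implements Shape {
    @Override
    public void render() { System.out.println("drawing a rectangle"); }
}

class Circle implements Shape {
    @Override
    public void render() { System.out.println("drawing a circle"); }
}

class Painter {
    static void draw(Shape s) { // one method serves every current and future Shape subtype
        s.render();             // compile-time check against Shape; the runtime type picks the body
    }
}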
From a design point of view, you will have one superclass, and there can be multiple subclasses in which you want to extend the functionality.
An implementer writing a subclass need only focus on which methods to override.
The question about access to protected member in Java was already asked and answered a lot of times, for example:
Java: protected access across packages
But I can't understand why it is implemented this way; see this explanation from "The Java Programming Language" (4th ed.):
"The reasoning behind the restriction is this: Each subclass inherits the contract of the superclass and expands that contract in some way. Suppose that one subclass, as part of its expanded contract, places constraints on the values of protected members of the superclass. If a different subclass could access the protected members of objects of the first subclass then it could manipulate them in a way that would break the first subclass's contract and this should not be permissible."
OK, that's clear, but consider this inheritance structure (extract from some code):
package package1;

public class A {
    protected int x;
}

package package2;

public class B extends A {
    public static void main(String[] args) {
        C subclass = new C();
        subclass.x = 7; // here any constraints can be broken - ??
    }
}

class C extends B {
    // class which places constraints on the value of protected member x
    ...
}
Here subclass.x = 7 is a valid statement which still can break a C's contract.
What am I missing?
Edited (added): Maybe I should not apply the cited logic to this situation? If we were dealing with only one package, then no restrictions would exist at all. So maybe a direct inheritance chain is treated in a simplified way, meaning that the superclass must know what it is doing...
It's ultimately all about following contracts, as stated in your posted quote. If you're really worried that someone won't read the contract, then there's a defensive programming solution to all this that introduces validation on modification.
By this I mean that the code you posted can break the contract; this, however, couldn't:
public class A {
    private int x;

    protected final void setX(int x) throws IllegalArgumentException {
        if (x < 0)
            throw new IllegalArgumentException("x cannot be negative");
        subValidateX(x);
        this.x = x;
    }

    /**
     * Subclasses that wish to provide extra validation should override this method
     */
    protected void subValidateX(int x) {
        // Intentional no-op
    }
}
Here, I've done three major things:
I made x private so it can only be assigned from within A (excluding things like reflection, of course),
I made the setter final which prevents subclasses from overriding it and removing my validation, and
I made a protected method that can be overridden by subclasses to provide extra validation in addition to mine to make sure that subclasses can narrow requirements on x, but not widen them to include things like negative integers since my validation already checked that.
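A hypothetical subclass narrowing the rule further might look like this (StrictA is an invented name for illustration):
class StrictA extends A {
    @Override
    protected void subValidateX(int x) {
        // Additional validation on top of A's: reject zero as well.
        if (x == 0)
            throw new IllegalArgumentException("x must be strictly positive");
    }
}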
There are lots of good resources for how to design for inheritance in Java, especially when it comes to super-defensive protect-the-contract API programming like my example above. I'd recommend looking them up on your favorite search engine.
Ultimately, though, the developer writing the subclass needs to be responsible enough to read documentation, especially when you get into interface implementation.
Inherited classes are implicitly friends with their parent. So as soon as C inherits from B, it is actually normal that B has visibility of C's x attribute.
Since C extends B, having
C c = new C();
c.x = 1;
is, with respect to your issue, exactly the same as
B b = new C();
b.x = 1;
The Java compiler doesn't consider the runtime type of the objects referred to by b and c in the above code; all it sees is the declared type, which is B and C, respectively. Now, since my second example obviously must work (the code in class B is accessing its own property, after all), it follows that the first example must work as well; otherwise it would mean that Java allows you to do less on a more specific type, which is a paradox.
Note that all the code is a simplified example intended only to communicate the core ideas of my question. It should all compile and run, though, after slight editing.
I have several classes which all implement a common interface.
public interface Inter{}
public class Inter1 implements Inter{}
public class Inter2 implements Inter{}
In a separate class I have a list of type Inter, which I use to store and remove Inter1 and Inter2 types, based on user input.
java.util.ArrayList<Inter> inters = new java.util.ArrayList<Inter>();
I also have a family of overloaded methods, which deal with how each implementation interacts with the others, along with a default implementation for two Inters.
void doSomething(Inter in1, Inter in2) {
    System.out.println("Inter/Inter");
}

void doSomething(Inter1 in1, Inter1 in2) {
    System.out.println("Inter1/Inter1");
}

void doSomething(Inter2 in1, Inter1 in2) {
    System.out.println("Inter2/Inter1");
}
The methods are periodically called like so:
for (int i = 0; i < inters.size() - 1; i++) {
    for (int o = i + 1; o < inters.size(); o++) {
        Inter in1 = inters.get(i);
        Inter in2 = inters.get(o);
        doSomething(in1.getClass().cast(in1), in2.getClass().cast(in2));
        System.out.println("Class 1: " + in1.getClass().getName());
        System.out.println("Class 2: " + in2.getClass().getName());
    }
}
An example output from this is:
Inter/Inter
Class 1: Inter
Class 2: Inter
Inter/Inter
Class 1: Inter
Class 2: Inter1
Inter/Inter
Class 1: Inter1
Class 2: Inter1
Looking at the output, it is clear that doSomething(Inter in1, Inter in2) is always called, even in cases where the other methods should be called. Interestingly, the class names printed are the correct ones.
Why does Java resolve method overloading statically, when the class types can be determined at runtime using reflection?
Is there any way to get Java to do this? I know I can use reflection and Class.getMethod() and method.invoke() to get the results I want, but it would be so much neater to do so with casting.
I realize that questions about similar concepts have been asked before, but while all of the answers were informative, none satisfied me.
Double dispatch looked like it would work, but that would mean reworking a lot of code, since I use this type of thing often.
It looks to me like we're talking about what's going on with:
doSomething(in1.getClass().cast(in1), in2.getClass().cast(in2));
Based on your surprise that the type that is being output is always Inter, it seems you're a little confused on what's going on here. In particular, you seem to think that in1.getClass().cast(in1) and in2.getClass().cast(in2) should be forcing a different overload because of their differing runtime type. However, this is wrong.
Method overload resolution happens statically. This means that it happens based on the declared types of the two arguments to the method. Since both in1 and in2 are both declared as Inter, the method chosen is obviously void doSomething(Inter in1, Inter in2).
The takeaway here is that in1 is declared as an Inter. This means that in1.getClass() is essentially the same as Inter.class for the purposes of static analysis -- getClass simply returns a Class<? extends Inter>. Therefore, the casts are useless, and you're only ever going to get the first overload.
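If you want a specific overload without reflection, the arguments' compile-time types have to name it; one way is a chain of instanceof checks with explicit casts (a sketch of the idea, not a full solution):
if (in1 instanceof Inter1 && in2 instanceof Inter1) {
    doSomething((Inter1) in1, (Inter1) in2); // static types Inter1, Inter1: selects the second overload
} else if (in1 instanceof Inter2 && in2 instanceof Inter1) {
    doSomething((Inter2) in1, (Inter1) in2); // static types Inter2, Inter1: selects the third overload
} else {
    doSomething(in1, in2);                   // falls back to the Inter/Inter overload
}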
The Java Language Specification (JLS) in section 15.12 Method Invocation Expression explains in detail the process that the compiler follows to choose the right method to invoke.
There, you will notice that this is a compile-time task. The JLS says in subsection 15.12.2:
This step uses the name of the method and the types of the argument expressions to locate methods that are both accessible and applicable. There may be more than one such method, in which case the most specific one is chosen.
In your case, this means that since both argument expressions have the compile-time type Inter, the most specific applicable method is the one that receives exactly two Inter parameters.
To verify the compile-time nature of this, you can do the following test.
Declare a class like this and compile it.
public class ChooseMethod {
    public void doSomething(Number n) {
        System.out.println("Number");
    }
}
Declare a second class that invokes a method of the first one and compile it.
public class MethodChooser {
    public static void main(String[] args) {
        ChooseMethod m = new ChooseMethod();
        m.doSomething(10);
    }
}
If you invoke the main, the output says Number.
Now, add a second more specific method to the ChooseMethod class, and recompile it (but do not recompile the other class).
public void doSomething(Integer i) {
    System.out.println("Integer");
}
If you run the main again, the output is still Number.
Basically, because it was decided at compile time. If you recompile the MethodChooser class (the one with the main), and run the program again, the output will be Integer.
As such, if you want to force the selection of one of the overloaded methods, the type of the arguments must correspond with the type of the parameters at compile time, and not only at run time as you seem to expect in this exercise.
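An explicit cast gives you that compile-time control, since it changes the compile-time type of the argument and therefore the chosen overload; continuing the example above (assuming both classes have been recompiled with both overloads present):
m.doSomething(10);          // boxed to Integer at compile time: prints "Integer"
m.doSomething((Number) 10); // the cast widens the compile-time type: prints "Number"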
In Java, it is perfectly legal to declare final parameters in interface methods and not obey that in the implementing class, e.g.:
public interface Foo {
    public void foo(int bar, final int baz);
}

public class FooImpl implements Foo {
    @Override
    public void foo(final int bar, int baz) {
        ...
    }
}
In the above example, bar and baz have opposite final declarations in the class vs. the interface.
In the same fashion, no final restrictions are enforced when one class method overrides another, whether abstract or not.
While final has some practical value inside the class method body, is there any point specifying final for interface method parameters?
It doesn't seem like there's any point to it. According to the Java Language Specification 4.12.4:
Declaring a variable final can serve as useful documentation that its value will not change and can help avoid programming errors.
However, a final modifier on a method parameter is not mentioned in the rules for matching signatures of overridden methods, and it has no effect on the caller, only within the body of an implementation. Also, as noted by Robin in a comment, the final modifier on a method parameter has no effect on the generated byte code. (This is not true for other uses of final.)
Some IDEs will copy the signature of the abstract/interface method when inserting an implementing method in a sub class.
I don't believe it makes any difference to the compiler.
EDIT: While I believe this was true in the past, I don't think current IDEs do this any more.
A final modifier on method parameters is only ever relevant to the method implementation, never to the caller. Therefore, there is no real reason to use it in interface method signatures, unless you want to follow the same consistent coding standard, which requires final method parameters, in all method signatures. Then it is nice to be able to do so.
Update: Original answer below was written without fully understanding the question, and therefore does not directly address the question :) Nevertheless, it must be informative for those looking to understand the general use of final keyword.
As for the question, I would like to quote my own comment from below.
I believe you're not forced to mirror the finality of an argument so that you're left free to decide whether it should be final or not in your own implementation.
But yes, it sounds rather odd that you can declare it final in the interface, but have it non-final in the implementation. It would have made more sense if either:
a. final keyword was not allowed for interface (abstract) method arguments (but you can use it in implementation), or
b. declaring an argument as final in interface would force it to be declared final in implementation (but not forced for non-finals).
I can think of two reasons why a method signature might have final parameters: beans and objects. (Actually, they are both the same reason, but in slightly different contexts.)
Objects:
public static void main(String[] args) {
    StringBuilder cookingPot = new StringBuilder("Water ");
    addVegetables(cookingPot);
    addChicken(cookingPot);
    System.out.println(cookingPot.toString());
    // ^--- OUTPUT IS: Water Carrot Broccoli Chicken ChickenBroth
    // We forgot to add cauliflower. It went into the wrong pot.
}

private static void addVegetables(StringBuilder cookingPot) {
    cookingPot.append("Carrot ");
    cookingPot.append("Broccoli ");
    cookingPot = new StringBuilder(cookingPot.toString());
    // ^--- Assignment allowed...
    cookingPot.append("Cauliflower ");
}

private static void addChicken(final StringBuilder cookingPot) {
    cookingPot.append("Chicken ");
    //cookingPot = new StringBuilder(cookingPot.toString());
    // ^---- COMPILATION ERROR! It is final.
    cookingPot.append("ChickenBroth ");
}
The final keyword ensures that we do not accidentally create a new local cooking pot: the compiler shows an error when we attempt to do so. This guarantees the chicken broth is added to the original cooking pot that the addChicken method received. Compare this to addVegetables, where we lost the cauliflower because it was added to a new local cooking pot instead of the original pot.
Beans:
It is the same concept as with objects (shown above). Beans are essentially objects in Java. However, beans (JavaBeans) are used in various applications as a convenient way to store and pass around a defined collection of related data. Just as addVegetables could mess up the cooking process by creating a new cooking pot StringBuilder and throwing it away along with the cauliflower, it could do the same with a cooking pot JavaBean.
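A sketch of the same pitfall with a hypothetical JavaBean (OrderBean and approve are invented names for illustration):
class OrderBean {                 // hypothetical bean bundling related data
    private String status = "NEW";
    public String getStatus() { return status; }
    public void setStatus(String status) { this.status = status; }
}

static void approve(final OrderBean order) {
    // order = new OrderBean();   // COMPILATION ERROR! The parameter is final.
    order.setStatus("APPROVED");  // mutating the caller's bean is still allowed
}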
I believe it may be a superfluous detail, as whether it's final or not is an implementation detail.
(Sort of like declaring methods/members in an interface as public.)