Are there currently (Java 6) things you can do in Java bytecode that you can't do from within the Java language?
I know both are Turing complete, so read "can do" as "can do significantly faster/better, or just in a different way".
I'm thinking of extra bytecodes like invokedynamic, which can't be generated from Java source, though that particular one is slated for a future version.
After working with Java byte code for quite a while and doing some additional research on this matter, here is a summary of my findings:
Execute code in a constructor before calling a super constructor or auxiliary constructor
In the Java programming language (JPL), a constructor's first statement must be an invocation of a super constructor or of another constructor of the same class. This is not true for Java byte code (JBC). Within byte code, it is absolutely legitimate to execute arbitrary code before the constructor invocation, as long as the following conditions hold (a bytecode-generation sketch follows the list):
Another compatible constructor is called at some time after this code block.
This call is not within a conditional statement.
Before this constructor call, no field of the constructed instance is read and none of its methods is invoked. This implies the next item.
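To make this concrete, here is a minimal sketch using the ASM library (an assumption on my part; any bytecode library would do, and all names are illustrative) that generates a constructor printing a message before the super constructor call, which javac would reject in source form:

import org.objectweb.asm.ClassWriter;
import org.objectweb.asm.MethodVisitor;
import static org.objectweb.asm.Opcodes.*;

public class EarlyCodeGenerator {
    // Returns the bytes of a class "EarlyCode" whose no-arg constructor
    // prints a message *before* invoking super() - illegal in Java source.
    public static byte[] generate() {
        ClassWriter cw = new ClassWriter(ClassWriter.COMPUTE_FRAMES);
        cw.visit(V1_8, ACC_PUBLIC, "EarlyCode", null, "java/lang/Object", null);

        MethodVisitor mv = cw.visitMethod(ACC_PUBLIC, "<init>", "()V", null, null);
        mv.visitCode();
        // Code executed before the super constructor call (does not touch 'this'):
        mv.visitFieldInsn(GETSTATIC, "java/lang/System", "out", "Ljava/io/PrintStream;");
        mv.visitLdcInsn("before super()");
        mv.visitMethodInsn(INVOKEVIRTUAL, "java/io/PrintStream", "println",
                "(Ljava/lang/String;)V", false);
        // The mandatory, unconditional super constructor call:
        mv.visitVarInsn(ALOAD, 0);
        mv.visitMethodInsn(INVOKESPECIAL, "java/lang/Object", "<init>", "()V", false);
        mv.visitInsn(RETURN);
        mv.visitMaxs(2, 1);
        mv.visitEnd();

        cw.visitEnd();
        return cw.toByteArray();
    }
}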
Set instance fields before calling a super constructor or auxiliary constructor
As mentioned before, it is perfectly legal to set a field value of an instance before calling another constructor. There even exists a legacy hack that made it possible to exploit this "feature" in Java versions before 6:
class Foo {
    public String s;

    public Foo() {
        System.out.println(s);
    }
}

class Bar extends Foo {
    public Bar() {
        this(s = "Hello World!");
    }

    private Bar(String helper) {
        super();
    }
}
This way, a field could be set before the super constructor was invoked, which is no longer possible in the language. In JBC, this behavior can still be implemented.
Branch a super constructor call
In Java, it is not possible to define a constructor call like
class Foo {
    Foo() { }
    Foo(Void v) { }
}

class Bar extends Foo {
    Bar() {
        if (System.currentTimeMillis() % 2 == 0) {
            super();
        } else {
            super(null);
        }
    }
}
Until Java 7u23, the HotSpot VM's verifier missed this check, which is why it was possible. Several code generation tools used this as a sort of hack, but it is no longer legal to implement a class like this.
The latter was merely a bug in this compiler version; in newer compiler versions, this is again possible.
Define a class without any constructor
The Java compiler will always implement at least one constructor for any class. In Java byte code, this is not required. This allows the creation of classes that cannot be constructed even when using reflection. However, using sun.misc.Unsafe still allows for the creation of such instances.
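A minimal sketch of the Unsafe route mentioned above (the constructor-less class itself would have to be generated with a bytecode tool, so a nested class with a throwing private constructor stands in here; all names are illustrative, and recent JDKs may restrict this reflective access):

import java.lang.reflect.Field;
import sun.misc.Unsafe;

public class WithoutConstructor {
    static class Unconstructible {
        private Unconstructible() { throw new AssertionError("never called"); }
    }

    public static void main(String[] args) throws Exception {
        // Obtain the Unsafe singleton reflectively; its constructor is private.
        Field f = Unsafe.class.getDeclaredField("theUnsafe");
        f.setAccessible(true);
        Unsafe unsafe = (Unsafe) f.get(null);

        // Allocates an instance without running any constructor at all.
        Object o = unsafe.allocateInstance(Unconstructible.class);
        System.out.println(o.getClass());
    }
}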
Define methods with identical signature but with different return type
In the JPL, a method is identified as unique by its name and its raw parameter types. In JBC, the raw return type is additionally considered.
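A sketch of how such a class could be generated (again assuming the ASM library; the class and method names are illustrative): it contains two methods both named value, taking no arguments and differing only in their return type, which javac would reject.

import org.objectweb.asm.ClassWriter;
import org.objectweb.asm.MethodVisitor;
import static org.objectweb.asm.Opcodes.*;

class ReturnTypeOverloadGenerator {
    static byte[] generate() {
        ClassWriter cw = new ClassWriter(ClassWriter.COMPUTE_FRAMES);
        cw.visit(V1_8, ACC_PUBLIC, "TwoReturns", null, "java/lang/Object", null);

        // int value()
        MethodVisitor m1 = cw.visitMethod(ACC_PUBLIC, "value", "()I", null, null);
        m1.visitCode();
        m1.visitInsn(ICONST_0);
        m1.visitInsn(IRETURN);
        m1.visitMaxs(1, 1);
        m1.visitEnd();

        // String value() - same name and parameters, different return type
        MethodVisitor m2 = cw.visitMethod(ACC_PUBLIC, "value", "()Ljava/lang/String;", null, null);
        m2.visitCode();
        m2.visitLdcInsn("hello");
        m2.visitInsn(ARETURN);
        m2.visitMaxs(1, 1);
        m2.visitEnd();

        cw.visitEnd();
        return cw.toByteArray();
    }
}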
Define fields that do not differ by name but only by type
A class file can contain several fields of the same name as long as they declare a different field type. The JVM always refers to a field as a tuple of name and type.
Throw undeclared checked exceptions without catching them
The Java runtime and the Java byte code are not aware of the concept of checked exceptions. It is only the Java compiler that verifies that checked exceptions are always either caught or declared if they are thrown.
Use dynamic method invocation outside of lambda expressions
The so-called dynamic method invocation can be used for anything, not only for Java's lambda expressions. Using this feature makes it possible, for example, to switch out execution logic at runtime. Many dynamic programming languages that compile down to JBC improved their performance by using this instruction. In Java byte code, you could also have emulated lambda expressions in Java 7, where the compiler did not yet allow any use of dynamic method invocation while the JVM already understood the instruction.
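What can be expressed in plain Java is the bootstrap method that an invokedynamic instruction points to; the instruction itself still has to be emitted with a bytecode tool. A hedged sketch (the binding to String.toUpperCase is an arbitrary illustration):

import java.lang.invoke.CallSite;
import java.lang.invoke.ConstantCallSite;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

public class IndyBootstrap {
    // Standard bootstrap signature: the JVM calls this once, the first time the
    // invokedynamic instruction referencing it is executed, and caches the
    // returned call site. Returning a MutableCallSite instead would allow
    // swapping the execution logic at runtime.
    public static CallSite bootstrap(MethodHandles.Lookup lookup,
                                     String name,
                                     MethodType type) throws Exception {
        return new ConstantCallSite(
                lookup.findVirtual(String.class, "toUpperCase",
                                   MethodType.methodType(String.class))
                      .asType(type));
    }
}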
Use identifiers that are not normally considered legal
Ever fancied using spaces and a line break in your method's name? Create your own JBC and good luck with code review. The only illegal characters for identifiers are ., ;, [ and /. Additionally, methods that are not named <init> or <clinit> cannot contain < and >.
Reassign final parameters or the this reference
final parameters do not exist in JBC and can consequently be reassigned. Any parameter, including the this reference, is simply stored in the local variable array within the JVM, which makes it possible to reassign the this reference at index 0 within a single method frame.
Reassign final fields
As long as a final field is assigned within a constructor, it is legal to reassign this value or even not assign a value at all. Therefore, the following two constructors are legal:
class Foo {
    final int bar;

    Foo() { } // bar == 0

    Foo(Void v) { // bar == 2
        bar = 1;
        bar = 2;
    }
}
For static final fields, it is even allowed to reassign the fields outside of the class initializer.
Treat constructors and the class initializer as if they were methods
This is more of a conceptual feature, but constructors are not treated any differently within JBC than normal methods. It is only the JVM's verifier that assures that constructors call another legal constructor. Other than that, it is merely a Java naming convention that constructors are called <init> and that the class initializer is called <clinit>. Besides this difference, the representation of methods and constructors is identical. As Holger pointed out in a comment, you can even define constructors with return types other than void, or a class initializer with arguments, even though it is not possible to call these methods.
Create asymmetric records.
When creating a record
record Foo(Object bar) { }
javac will generate a class file with a single field named bar, an accessor method named bar(), and a constructor taking a single Object. Additionally, a record attribute for bar is added. By generating a record manually, it is possible to create a different constructor shape, to skip the field, and to implement the accessor differently. At the same time, it is still possible to make the reflection API believe that the class represents an actual record.
Call any super method (until Java 1.1)
However, this is only possible for Java versions 1 and 1.1. In JBC, methods are always dispatched on an explicit target type. This means that for
class Foo {
    void baz() { System.out.println("Foo"); }
}

class Bar extends Foo {
    @Override
    void baz() { System.out.println("Bar"); }
}

class Qux extends Bar {
    @Override
    void baz() { System.out.println("Qux"); }
}
it was possible to implement Qux#baz to invoke Foo#baz while jumping over Bar#baz. While it is still possible to define an explicit invocation of another super method implementation than that of the direct super class, this no longer has any effect in Java versions after 1.1. In Java 1.1, this behavior was controlled by the ACC_SUPER flag: setting it enabled the modern behavior where only the direct super class's implementation is invoked.
Define a non-virtual call of a method that is declared in the same class
In Java, consider the following classes:
class Foo {
    void foo() {
        bar();
    }

    void bar() { }
}

class Bar extends Foo {
    @Override
    void bar() {
        throw new RuntimeException();
    }
}
The above code will always result in a RuntimeException when foo is invoked on an instance of Bar. It is not possible to define the Foo::foo method to invoke its own bar method which is defined in Foo. As bar is a non-private instance method, the call is always virtual. With byte code, one can however define the invocation to use the INVOKESPECIAL opcode which directly links the bar method call in Foo::foo to Foo's version. This opcode is normally used to implement super method invocations but you can reuse the opcode to implement the described behavior.
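A sketch (ASM again assumed; names are illustrative) of how the body of Foo.foo() could be emitted so that bar() is dispatched non-virtually:

import org.objectweb.asm.MethodVisitor;
import static org.objectweb.asm.Opcodes.*;

class NonVirtualCallEmitter {
    // Emits a body equivalent to "this.bar();" but using INVOKESPECIAL,
    // which pins the call to Foo's own bar() even if 'this' is really a Bar.
    static void emitFooBody(MethodVisitor mv) {
        mv.visitCode();
        mv.visitVarInsn(ALOAD, 0);                        // push 'this'
        mv.visitMethodInsn(INVOKESPECIAL, "Foo", "bar", "()V", false);
        mv.visitInsn(RETURN);
        mv.visitMaxs(1, 1);
        mv.visitEnd();
    }
}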
Fine-grain type annotations
In Java, annotations are applied according to the @Target that the annotation declares. Using byte code manipulation, it is possible to define annotations independently of this control. Also, it is for example possible to annotate a parameter type without annotating the parameter, even if the @Target annotation applies to both elements.
Define any attribute for a type or its members
Within the Java language, it is only possible to define annotations for fields, methods or classes. In JBC, you can basically embed any information into a Java class. In order to make use of this information, you can, however, no longer rely on the Java class loading mechanism; you need to extract the meta information yourself.
Overflow and implicitly assign byte, short, char and boolean values
The latter primitive types are not normally known in JBC but are only defined for array types and for field and method descriptors. Within byte code instructions, all of the named types occupy 32 bits, which allows them to be represented as an int. Officially, only the int, float, long and double types exist within byte code, all of which require explicit conversion according to the rules of the JVM's verifier.
Not release a monitor
A synchronized block is actually made up of two statements, one to acquire and one to release a monitor. In JBC, you can acquire one without releasing it.
Note: In recent implementations of HotSpot, this instead leads to an IllegalMonitorStateException at the end of a method or to an implicit release if the method is terminated by an exception itself.
Add more than one return statement to a type initializer
In Java, even a trivial type initializer such as
class Foo {
static {
return;
}
}
is illegal. In byte code, the type initializer is treated just as any other method, i.e. return statements can be defined anywhere.
Create irreducible loops
The Java compiler converts loops to goto statements in Java byte code. Such statements can be used to create irreducible loops, which the Java compiler never does.
Define a recursive catch block
In Java byte code, you can define a block:
try {
    throw new Exception();
} catch (Exception e) {
    <goto on exception>
    throw new Exception();
}
A similar construct is created implicitly when using a synchronized block in Java, where any exception thrown while releasing the monitor jumps back to the instruction for releasing this monitor. Normally, no exception should occur on such an instruction, but if one did (e.g. the deprecated ThreadDeath), the monitor would still be released.
Call any default method
The Java compiler requires several conditions to be fulfilled in order to allow a default method's invocation:
The method must be the most specific one (must not be overridden by a sub interface that is implemented by any type, including super types).
The default method's interface type must be implemented directly by the class that is calling the default method. However, if interface B extends interface A but does not override a method in A, the method can still be invoked.
For Java byte code, only the second condition counts; the first one is irrelevant.
Invoke a super method on an instance that is not this
The Java compiler only allows invoking a super (or interface default) method on this. In byte code, it is however also possible to invoke the super method on another instance of the same type, similar to the following:
class Foo {
    void m(Foo f) {
        f.super.toString(); // calls Object::toString
    }

    public String toString() {
        return "foo";
    }
}
Access synthetic members
In Java byte code, it is possible to access synthetic members directly. For example, consider how in the following example the outer instance of another Bar instance is accessed:
class Foo {
    class Bar {
        void bar(Bar bar) {
            Foo foo = bar.Foo.this;
        }
    }
}
This is generally true for any synthetic field, class or method.
Define out-of-sync generic type information
While the Java runtime does not process generic types (after the Java compiler applies type erasure), this information is still attached to a compiled class as meta information and made accessible via the reflection API.
The verifier does not check the consistency of these String-encoded metadata values. It is therefore possible to define information on generic types that does not match the erasure. As a consequence, the following assertions can be true:
Method method = ...
assertTrue(method.getParameterTypes() != method.getGenericParameterTypes());

Field field = ...
assertTrue(field.getType() == String.class);
assertTrue(field.getGenericType() == Integer.class);
Also, the signature can be defined as invalid such that a runtime exception is thrown. This exception is thrown when the information is accessed for the first time as it is evaluated lazily. (Similar to annotation values with an error.)
Append parameter meta information only for certain methods
The Java compiler allows embedding parameter name and modifier information when compiling a class with the -parameters flag enabled. In the Java class file format, this information is however stored per method, which makes it possible to embed such information for only some methods.
Mess things up and hard-crash your JVM
As an example, in Java byte code, you can define an invocation of any method on any type. Usually, the verifier will complain if a type does not know of such a method. However, if you invoke an unknown method on an array, I found a bug in some JVM version where the verifier misses this and your JVM will finish off once the instruction is executed. This is hardly a feature, but it is technically something that is not possible with javac-compiled Java. Java has some sort of double validation: the first validation is applied by the Java compiler, the second one by the JVM's verifier when a class is loaded. By skipping the compiler, you might find a weak spot in the verifier's validation. This is rather a general statement than a feature, though.
Annotate a constructor's receiver type when there is no outer class
Since Java 8, non-static methods and constructors of inner classes can declare a receiver type and annotate these types. Constructors of top-level classes cannot annotate their receiver type as they must not declare one.
class Foo {
    class Bar {
        Bar(@TypeAnnotation Foo Foo.this) { }
    }

    Foo() { } // Must not declare a receiver type
}
Since Foo.class.getDeclaredConstructor().getAnnotatedReceiverType() does however return an AnnotatedType representing Foo, it is possible to include type annotations for Foo's constructor directly in the class file where these annotations are later read by the reflection API.
Use unused / legacy byte code instructions
Since others named it, I will include it as well. Java formerly made use of subroutines via the JSR and RET instructions. JBC even knew its own return-address type for this purpose. However, the use of subroutines overcomplicated static code analysis, which is why these instructions are no longer used. Instead, the Java compiler duplicates the code it compiles. However, this basically creates identical logic, which is why I do not really consider it to achieve something different. Similarly, you could for example add the NOP byte code instruction, which is not emitted by the Java compiler either, but this would not really allow you to achieve something new. As pointed out in the comments, these mentioned "feature instructions" are now removed from the set of legal opcodes, which renders them even less of a feature.
As far as I know there are no major features in the bytecodes supported by Java 6 that are not also accessible from Java source code. The main reason for this is obviously that the Java bytecode was designed with the Java language in mind.
There are some features that are not produced by modern Java compilers, however:
The ACC_SUPER flag:
This is a flag that can be set on a class and specifies how a specific corner case of the invokespecial bytecode is handled for this class. It is set by all modern Java compilers (where "modern" is >= Java 1.1, if I remember correctly) and only ancient Java compilers produced class files where this was un-set. This flag exists only for backwards-compatibility reasons. Note that starting with Java 7u51, ACC_SUPER is ignored completely due to security reasons.
The jsr/ret bytecodes.
These bytecodes were used to implement sub-routines (mostly for implementing finally blocks). They are no longer produced since Java 6. The reason for their deprecation is that they complicate static verification a lot for no great gain (i.e. code that uses them can almost always be re-implemented with normal jumps with very little overhead).
Having two methods in a class that only differ in return type.
The Java language specification does not allow two methods in the same class when they differ only in their return type (i.e. same name, same argument list, ...). The JVM specification however, has no such restriction, so a class file can contain two such methods, there's just no way to produce such a class file using the normal Java compiler. There's a nice example/explanation in this answer.
Here are some features that can be done in Java bytecode but not in Java source code:
Throwing a checked exception from a method without declaring that the method throws it. The distinction between checked and unchecked exceptions is enforced only by the Java compiler, not the JVM. Because of this, Scala, for example, can throw checked exceptions from methods without declaring them. With Java generics there is a workaround called sneaky throw (see the sketch after this list).
Having two methods in a class that only differ in return type, as already mentioned in Joachim's answer: The Java language specification does not allow two methods in the same class when they differ only in their return type (i.e. same name, same argument list, ...). The JVM specification however, has no such restriction, so a class file can contain two such methods, there's just no way to produce such a class file using the normal Java compiler. There's a nice example/explanation in this answer.
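Here is a minimal sketch of the generics-based sneaky throw mentioned above; the method and class names are illustrative:

import java.io.IOException;

public class SneakyThrowExample {
    // The unchecked cast makes the compiler infer T as RuntimeException,
    // so no throws clause is needed at the call site, yet the original
    // (checked) exception object is thrown unchanged.
    @SuppressWarnings("unchecked")
    static <T extends Throwable> void sneakyThrow(Throwable t) throws T {
        throw (T) t;
    }

    public static void main(String[] args) {
        // Throws a checked IOException without declaring or catching it.
        sneakyThrow(new IOException("not declared anywhere"));
    }
}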
GOTO can be used with labels to create your own control structures (other than for, while, etc.)
You can override the this local variable inside a method
Combining both of these you can create tail-call optimised bytecode (I do this in JCompilo)
As a related point, you can get parameter names for methods if compiled with debug information (Paranamer does this by reading the bytecode); a sketch of a newer, reflection-based route follows
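On a related note, since Java 8 parameter names can also be recorded with the javac -parameters flag and read via java.lang.reflect.Parameter; this is a different mechanism from the debug information Paranamer parses, but a small sketch may still be useful (names are illustrative):

import java.lang.reflect.Method;
import java.lang.reflect.Parameter;

public class ParamNames {
    static void greet(String name, int times) { }

    public static void main(String[] args) throws Exception {
        Method m = ParamNames.class.getDeclaredMethod("greet", String.class, int.class);
        for (Parameter p : m.getParameters()) {
            // Prints real names only if compiled with `javac -parameters`;
            // otherwise the synthetic names arg0, arg1, ... are reported.
            System.out.println(p.getName() + " (present=" + p.isNamePresent() + ")");
        }
    }
}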
Maybe section 7A in this document is of interest, although it's about bytecode pitfalls rather than bytecode features.
In the Java language, the first statement in a constructor must be a call to the super class constructor. Bytecode does not have this limitation; instead the rule is that the super class constructor or another constructor in the same class must be called for the object before accessing its members. This should allow more freedom, such as:
Create an instance of another object, store it in a local variable (or stack) and pass it as a parameter to super class constructor while still keeping the reference in that variable for other use.
Call different other constructors based on a condition. This should be possible: How to call a different constructor conditionally in Java?
I have not tested these, so please correct me if I'm wrong.
Something you can do with byte code, rather than plain Java code, is generate code which can be loaded and run without a compiler. Many systems have a JRE rather than a JDK, and if you want to generate code dynamically it may be better, if not easier, to generate byte code instead of Java code, which has to be compiled before it can be used.
I wrote a bytecode optimizer when I was at I-Play (it was designed to reduce the code size for J2ME applications). One feature I added was the ability to use inline bytecode (similar to inline assembly language in C++). I managed to reduce the size of a function that was part of a library method by using the DUP instruction, since I needed the value twice. I also had zero-byte instructions: if you are calling a method that takes a char and you want to pass an int that you know does not need to be cast, I added int2char(var) to replace (char)(var), and it would remove the i2c instruction to reduce the size of the code. I also made it take float a = 2.3; float b = 3.4; float c = a + b; and convert that to fixed point (faster, and some J2ME devices did not support floating point).
In Java, if you attempt to override a public method with a protected method (or any other reduction in access), you get an error: "attempting to assign weaker access privileges". If you do it with JVM bytecode, the verifier is fine with it, and you can call these methods via the parent class as if they were public.
I know this question has been asked a lot, but the usual answers are far from satisfying in my view.
given the following class hierarchy:
class SuperClass{}
class SubClass extends SuperClass{}
why do people use this pattern to instantiate SubClass:
SuperClass instance = new SubClass();
instead of this one:
SubClass instance = new SubClass();
Now, the usual answer I see is that this is in order to send instance as an argument to a method that requires an instance of SuperClass like here:
void aFunction(SuperClass param){}
//somewhere else in the code...
...
aFunction(instance);
...
But I can send an instance of SubClass to aFunction regardless of the type of the variable that holds it! This means the following code will compile and run with no errors (assuming the previously provided definition of aFunction):
SubClass instance = new SubClass();
aFunction(instance);
In fact, AFAIK variable types are meaningless at runtime. They are used only by the compiler!
Another possible reason to define a variable as SuperClass would be if it had several different subclasses and the variable is supposed to switch its reference to several of them at runtime, but I, for example, only saw this happen in class (not super, not sub, just class). Definitely not sufficient to require a general pattern...
The main argument for this type of coding is the Liskov Substitution Principle, which states that if X is a subtype of type T, then any instance of T should be able to be swapped out with an X.
The advantage of this is simple. Let's say we've got a program that has a properties file, that looks like this:
mode="Run"
And your program looks like this:
public class Program
{
    public static Mode mode;

    public static void main(String[] args)
    {
        mode = Config.getMode();
        mode.run();
    }
}
So briefly, this program is going to use the config file to define the mode this program is going to boot up in. In the Config class, getMode() might look like this:
public Mode getMode()
{
    String type = getProperty("mode"); // Now equals "Run" in our example.
    switch(type)
    {
        case "Run": return new RunMode();
        case "Halt": return new HaltMode();
        default: throw new IllegalArgumentException("Unknown mode: " + type);
    }
}
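For completeness, a minimal sketch of the types this example assumes (Mode, RunMode and HaltMode are illustrative names from the answer, not a real API):

interface Mode {
    void run();
}

class RunMode implements Mode {
    @Override public void run() { System.out.println("Running normally"); }
}

class HaltMode implements Mode {
    @Override public void run() { System.out.println("Halting"); }
}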
Why this wouldn't work otherwise
Now, because you have a reference of type Mode, you can completely change the functionality of your program with simply changing the value of the mode property. If you had public RunMode mode, you would not be able to use this type of functionality.
Why this is a good thing
This pattern has caught on so well because it opens programs up for extensibility. It means that this type of desirable functionality is possible with the smallest amount of changes, should the author desire to implement this kind of functionality. And I mean, come on. You change one word in a config file and completely alter the program flow, without editing a single line of code. That is desirable.
In many cases it doesn't really matter but is considered good style.
You limit the information provided to users of the reference to what is necessary, i.e. that it is an instance of type SuperClass. It doesn't (and shouldn't) matter whether the variable references an object of type SuperClass or SubClass.
Update:
This also is true for local variables that are never used as a parameter etc.
As I said, it often doesn't matter but is considered good style because you might later change the variable to hold a parameter or another sub type of the super type. In that case, if you used the sub type first, your further code (in that single scope, e.g. method) might accidentally rely on the API of one specific sub type, and changing the variable to hold another type might break your code.
I'll expand on Chris' example:
Consider you have the following:
RunMode mode = new RunMode();
...
You might now rely on the fact that mode is a RunMode.
However, later you might want to change that line to:
RunMode mode = Config.getMode(); //breaks
Oops, that doesn't compile. Ok, let's change that.
Mode mode = Config.getMode();
That line would compile now, but your further code might break, because you accidentally relied on mode being an instance of RunMode. Note that it might compile but could break at runtime or mess up your logic.
If you write SuperClass instance = new SubClass1();, then some lines later you may do instance = new SubClass2();.
But if you write SubClass1 instance = new SubClass1();, then some lines later you can't do instance = new SubClass2();.
This is called polymorphism: a superclass reference to a subclass object.
In fact, AFAIK variable types are meaningless at runtime. They are used
only by the compiler!
Not sure where you read this from. At compile time, the compiler only knows the class of the reference type (so the super class in case of polymorphism, as you have stated). At runtime, Java knows the actual type of the object (.getClass()). At compile time, the Java compiler only checks whether the invoked method's definition is in the class of the reference type. Which implementation to invoke (method overriding) is determined at runtime based on the actual type of the object.
Why polymorphism?
Well, Google for more, but here is an example. You have a common method draw(Shape s). Now the shape can be a Rectangle, a Circle, or any CustomShape. If you don't use a Shape reference in the draw() method, you will have to create a different method for each subclass of shape. A small sketch follows.
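A sketch of that idea (Shape, Circle, Rectangle and Drawer are illustrative names, not taken from any real library):

interface Shape {
    double area();
}

class Circle implements Shape {
    private final double radius;
    Circle(double radius) { this.radius = radius; }
    @Override public double area() { return Math.PI * radius * radius; }
}

class Rectangle implements Shape {
    private final double length, breadth;
    Rectangle(double length, double breadth) { this.length = length; this.breadth = breadth; }
    @Override public double area() { return length * breadth; }
}

class Drawer {
    // One method covers every current and future Shape implementation.
    void draw(Shape s) {
        System.out.println("Drawing a shape with area " + s.area());
    }
}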
This is from a design point of view: you will have one super class and there can be multiple subclasses in which you want to extend the functionality.
An implementer who has to write a subclass need only focus on which methods to override.
Consider this code (complete class, runs fine, all classes in one class for the sake of brevity).
My questions are after the code listing:
import java.util.LinkedList;
import java.util.List;

class Gadget {
    public void switchon() {
        System.out.println("Gadget is Switching on!");
    }
}

interface switchonable {
    void switchon();
}

class Smartphone extends Gadget implements switchonable {
    @Override
    public void switchon() {
        System.out.println("Smartphone is switching on!");
    }
}

class DemoPersonnel {
    public void demo(Gadget g) {
        System.out.println("Demoing a gadget");
    }

    public void demo(Smartphone s) {
        System.out.println("Demoing a smartphone");
    }
}

public class DT {
    /**
     * @param args
     */
    public static void main(String[] args) {
        List<Gadget> l = new LinkedList<Gadget>();
        l.add(new Gadget());
        l.add(new Smartphone());

        for (Gadget gadget : l) {
            gadget.switchon();
        }

        DemoPersonnel p = new DemoPersonnel();
        for (Gadget gadget : l) {
            p.demo(gadget);
        }
    }
}
Questions:
From the compiler's point of view, what is the origin of the switchon method in Smartphone? Is it inherited from the base class Gadget? Or is it an implementation of the switchon method mandated by the switchonable interface? Does the annotation make any difference here?
In the main method, first loop: Here, we see a case of runtime polymorphism - i.e., when the first for loop is running, and gadget.switchon() is called, it first prints "Gadget is switching on", and then it prints "Smartphone is switching on". But in the second loop, this runtime resolution does not happen, and the output for both calls to demo is "Demoing a gadget", whereas I was expecting it to print "Demoing a gadget" the first iteration, and "Demoing a smartphone" the second time.
What am I understanding wrong? Why does the runtime resolve the child class in the first for loop, but doesn't do so in the second for loop?
Lastly, a link to a lucid tutorial on runtime/compile-time polymorphism in Java will be appreciated. (Please do not post the Java tutorial trail links, I didn't find the material particularly impressive when discussing the finer nuances in respectable depth).
Briefly, this is how it works:
Compiling time
The compiler defines the required signature for the requested method
Once the signature is defined, the compiler starts to look for it in the type-Class
If it finds a compatible candidate method with the required signature, it proceeds; otherwise it returns an error
Runtime
During execution JVM starts to look for the candidate method with the signature as exactly defined during the compiling-time.
The search for the executable method actually starts from the real object's implementation class (which can be a subclass of the type-Class) and walks up the whole hierarchy.
Your List is defined with type Gadget.
for (Gadget gadget : l) {
gadget.switchon();
}
When you ask for gadget.switchon(); the compiler will look for the switchon() method in the Gadget class and as it's there the candidate signature is simply confirmed to be switchon().
During the execution, the JVM will look for a switchon() method from the Smartphone Class and this is why it is displaying the correct message.
Here is what happens in the second for-loop
DemoPersonnel p = new DemoPersonnel();
for (Gadget gadget : l) {
p.demo(gadget);
}
The signature in this case is demo(Gadget g) for both objects, which is why the method demo(Gadget g) is executed in both iterations.
Hope it helps!
From the compiler's point of view, what is the origin of the switchon method in Smartphone? Is it inherited from the base class Gadget? Or is it an implementation of the switchon method mandated by the switchonable interface?
The second case
Does the annotation make any difference here?
Not at all, @Override is just a helper; when you use it you are telling the compiler: "my intention is to override the method from a supertype, please report an error and don't compile this if it is not overriding anything"
About the second question: in this case the method that best matches according to its signature is the one to be called, and that match is made against the static type. In the second loop your objects are referenced through the supertype, which is the reason public void demo(Gadget g) will be called rather than public void demo(Smartphone s).
1. It shouldn't matter. Because it is extending Gadget, if you don't override switchon() and call it on a Smartphone, it would say "Gadget is Switching on!". When you have both an interface and a parent class with the same method, it really doesn't matter.
2. The first loop works and the second doesn't because of the way Java looks at objects. When you call a method on an object, it takes the method directly from that object, and thus knows whether it is a Smartphone or a Gadget. When you send either a Smartphone or a Gadget into an overloaded method, everything in that call is treated as a Gadget, whether it is actually a Smartphone or not. Because of this, it uses the Gadget method. To make this work, you would want to use this in the demo(Gadget g) method of DemoPersonnel:
if (g instanceof Smartphone) {
    System.out.println("Demoing a smartphone");
} else {
    System.out.println("Demoing a gadget");
}
Sorry I don't have a link to a tutorial, I learned through a combination of AP Computer Science and experience.
Answering question 2 first: in the second loop you're passing an object typed as a Gadget, therefore the best match in the DemoPersonnel class is the method taking a Gadget. This is resolved at compile time.
For question 1: the annotation does not make a difference. It just indicates that you are overriding (implementing) the method from the interface.
For the compiler, Smartphone inherits the switchon() method "implementation" from Gadget and then overrides the inherited implementation with its own. On the other hand, the switchonable interface dictates that Smartphone provide an implementation for the switchon() method definition, which is fulfilled by the implementation overridden in Smartphone.
The first case is working as you expected because it is indeed a case of polymorphism, i.e. you have one contract and two implementations - one in Gadget and another in Smartphone, where the latter has "overridden" the former implementation. The second case "should not" work as you expect it to, because there is only one contract and one implementation. Do note that you are "not overriding" the demo() method, you are actually "overloading" it. Overloading means two "different" unique method definitions that only share the "same name". So it is a case of one contract and one implementation when you call demo() with a Gadget parameter, because the compiler matches the method name against the exact (static) parameter type, and by doing so calls the same demo(Gadget) method in both iterations of the loop.
About the second question:
In Java, dynamic method dispatch happens only for the object the method is called on, not for the parameter types of overloaded methods.
Here is a link to the java language specification.
As it says :
When a method is invoked (§15.12), the number of actual arguments (and
any explicit type arguments) and the compile-time types of the
arguments are used, at compile time, to determine the signature of the
method that will be invoked (§15.12.2). If the method that is to be
invoked is an instance method, the actual method to be invoked will be
determined at run time, using dynamic method lookup (§15.12.4).
So basically: the compile-time type of the method parameters is used to determine the signature of the method to be called.
At runtime, the class of the object the method is called on determines which implementation of that method is called, taking into account that it may be an instance of a subclass of the declared type that overrides the method.
In your case, when you create an object of a child class via new Child() and pass it to the overloaded method through a superclass-typed reference, the overload taking the parent type is called.
The annotation does not make any difference here. Technically it is like you are doing both things at once: overriding the parent's switchon() and implementing the switchon() interface method.
Method lookup (with respect to method arguments) is not done dynamically (at runtime) but statically at compile time. Looks strange, but that's how it works.
Hope this helps.
The compiler selects a method signature based on the declared type of the "this" pointer for the method and the declared type of the parameters. So since switchon receives a "this" pointer of Gadget, that is the version of the method that the compiler will reference in its generated code. Of course, runtime polymorphism can change that.
But runtime polymorphism only applies to the method's "this" pointer, not the parms, so the compiler's choice of method signature will "rule" in the second case.
I do know the syntactical difference between overriding and overloading. And I also know that overriding is run-time polymorphism and overloading is compile-time polymorphism. But my question is: "Is overloading really compile-time polymorphism? Is the method call really resolved at compile time?". To clarify my point, let's consider an example class.
public class Greeter {
    public void greetMe() {
        System.out.println("Hello");
    }

    public void greetMe(String name) {
        System.out.println("Hello " + name);
    }

    public void wishLuck() {
        System.out.println("Good Luck");
    }
}
Since all of the methods greetMe(), greetMe(String name), wishLuck() are public, they all can be overridden (including the overloaded one), right? For example,
public class FancyGreeter extends Greeter {
    public void greetMe() {
        System.out.println("***********");
        System.out.println("*  Hello  *");
        System.out.println("***********");
    }
}
Now, consider the following snippet:
Greeter greeter = GreeterFactory.getRandomGreeter();
greeter.greetMe();
The getRandomGreeter() method returns a random Greeter object. It may either return an object of Greeter or of any of its subclasses, like FancyGreeter or GraphicalGreeter or any other one. getRandomGreeter() will create the objects either using new, or by dynamically loading the class file and creating the object using reflection (I think that is possible with reflection), or in any other way that is possible. All of these methods of Greeter may or may not be overridden in subclasses. So the compiler has no way to know whether a particular method (overloaded or not) is overridden. Right? Also, Wikipedia says on virtual functions:
In Java, all non-static methods are by default "virtual functions".
Only methods marked with the keyword final, which cannot be overridden,
along with private methods, which are not inherited, are non-virtual.
Since virtual functions are resolved at run-time using dynamic method dispatch, and since all non-private, non-final methods are virtual (whether overloaded or not), they must be resolved at run-time. Right?
Then, How can overloading still be resolved at compile-time? Or, is there anything that I misunderstood, or am I missing?
Every 'Greeter' class has 3 virtual methods: void greetMe(), void greetMe(String), and void wishLuck().
When you call greeter.greetMe() the compiler can work out which one of the three virtual methods should be called from the method signature - ie. the void greetMe() one since it accepts no arguments. Which specific implementation of the void greetMe() method is called depends on the type of the greeter instance, and is resolved at run-time.
In your example it's trivial for the compiler to work out which method to call, since the method signatures are all completely different. A slightly better example for showing the 'compile time polymorphism' concept might be as follows:
class Greeter {
    public void greetMe(Object obj) {
        System.out.println("Hello Object!");
    }

    public void greetMe(String str) {
        System.out.println("Hello String!");
    }
}
Using this greeter class will give the following results:
Object obj = new Object();
String str = "blah";
Object strAsObj = str;
greeter.greetMe(obj); // prints "Hello Object!"
greeter.greetMe(str); // prints "Hello String!"
greeter.greetMe(strAsObj); // prints "Hello Object!"
The compiler will pick out the method with the most specific match using the compile-time type, which is why the 2nd example works and calls the void greetMe(String) method.
The last call is the most interesting one: Even though the run-time type of strAsObj is String, it has been cast as an Object so that's how the compiler sees it. So, the closest match the compiler can find for that call is the void greetMe(Object) method.
Overloaded methods can still be overridden, if that is what you ask.
Overloaded methods are like different families, even though they share the same name. The compiler statically chooses one family given the signature, and then at run time it is dispatched to the most specific method in the class hierarchy.
That is, method dispatching is performed in two steps:
The first one is done at compile time with the static information available, the compiler will emit a call for the signature that matches best your current method parameters among the list of overloaded methods in the declared type of the object the method is invoked upon.
The second step is performed at run time, given the method signature that should be called (previous step, remember?), the JVM will dispatch it to the most concrete overridden version in the actual type of receiver object.
If the method argument types are not covariant at all, overloading is equivalent to having the method names mangled at compile time; because they are effectively different methods, the JVM won't ever dispatch them interchangeably depending on the type of the receiver.
What is polymorphism?
According to me: if an entity can be represented in more than one form, that entity is said to exhibit polymorphism.
Now, lets apply this definition to Java constructs:
1) Operator overloading is compile time polymorphism.
For example, the + operator can be used to add two numbers OR to concatenate two strings. It's an example of polymorphism, strictly speaking compile-time polymorphism.
2) Method overloading is compile time polymorphism.
For example, a method with the same name can have more than one implementation. It's also compile-time polymorphism.
It's compile-time because the compiler decides the flow of the program before execution, i.e. which form will be used at run-time.
3) Method overriding is run-time polymorphism.
For example, a method with the same signature can have more than one implementation. It's run-time polymorphism.
4) Base class use in place of derived class is run time polymorphism.
For example, an interface reference can point to any of its implementors.
It's run-time because the flow of the program can't be known before execution, i.e. only at run-time can it be decided which form will be used.
I hope it clears a bit.
Overloading in this respect means that the type of the function is statically determined at compile time as opposed to dynamic dispatch.
What really happens behind the scenes is that for a method named "foo" with types "A" and "B" two methods are created ("foo_A" and "foo_B"). Which of them is to be called is determined at compile-time (foo((A) object) or foo((B) object) result in foo_A being called or foo_B). So in a way this is compile-time polymorphism, although the real method (i.e. which implementation in the class hierarchy to take) is determined at runtime.
I have a strong objection to calling method overloading compile-time polymorphism.
I agree that method overloading is static binding (compile time), but I don't see polymorphism in that.
I tried to put my opinion in my question to get clarification. You can refer to this link.
We always say that method overloading is static polymorphism and overriding is runtime polymorphism. What exactly do we mean by static here? Is the call to a method resolved on compiling the code? So what's the difference between a normal method call and calling a final method? Which one is linked at compile time?
Method overloading means making multiple versions of a function based on the inputs. For example:
public Double doSomething(Double x) { ... }
public Object doSomething(Object y) { ... }
The choice of which method to call is made at compile time. For example:
Double obj1 = new Double(1.0);
doSomething(obj1); // calls the Double version

Object obj2 = new Object();
doSomething(obj2); // calls the Object version

Object obj3 = new Double(1.0);
doSomething(obj3); // calls the Object version because the compiler sees the
                   // type as Object

// This makes more sense when you consider something like

public void myMethod(Object o) {
    doSomething(o);
}

myMethod(new Double(5));
// inside the call to myMethod, it sees only that it has an Object
// it can't tell that it's a Double at compile time
Method Overriding means defining a new version of the method by a subclass of the original
class Parent {
    public void myMethod() { ... }
}

class Child extends Parent {
    @Override
    public void myMethod() { ... }
}

Parent p = new Parent();
p.myMethod(); // calls Parent's myMethod

Child c = new Child();
c.myMethod(); // calls Child's myMethod

Parent pc = new Child();
pc.myMethod(); // calls Child's myMethod because the type is checked at runtime
               // rather than compile time
I hope that helps
You are right - calls to overloaded methods are resolved at compile time. That's why it is static.
Calls to overridden methods are resolved at run-time, based on the type on which the method is invoked.
On virtual methods wikipedia says:
In Java, all non-static methods are by default "virtual functions." Only methods marked with the keyword final are non-virtual.
final methods cannot be overridden, so they are resolved statically.
Imagine the method:
public String analyze(Interface i) {
i.analyze();
return i.getAnalysisDetails();
}
The compiler can't overload this method for all implementations of Interface that can possibly be passed to it.
I don't think you can call overloading any sort of polymorphism. Overloaded methods are linked at compile time, which kind of precludes calling it polymorphism.
Polymorphism refers to the dynamic binding of a method to its call when you use a base class reference for a derived class object. Overriding methods is how you implement this polymorphic behaviour.
I agree with rachel, because in the K&B book it is directly mentioned that overloading does not belong to polymorphism, in chapter 2 (object orientation). But in lots of places I found that overloading means static polymorphism because it is compile time, and overriding means dynamic polymorphism because it is run time.
But one interesting thing is in a C++ book (Object-Oriented Programming in C++ - Robert Lafore) it is also directly mentioned that overloading means static polymorphism.
But one more thing: Java and C++ are two different programming languages, and they have different object manipulation techniques, so maybe polymorphism differs between C++ and Java?
Method Overloading simply means providing two separate methods in a class with the same name but different arguments, while the method return type may or may not be different; this allows us to reuse the same method name.
But both methods are different, hence they can be resolved by the compiler at compile time; that is why it is also known as Compile-Time Polymorphism or Static Polymorphism.
Method Overriding means defining a method in the child class which is already defined in the parent class, with the same method signature, i.e. same name, arguments and return type.
Mammal mammal = new Cat();
System.out.println(mammal.speak());
At the line mammal.speak(), the compiler says the speak() method of reference type Mammal is getting called, so for the compiler this call is Mammal.speak().
But at execution time the JVM knows clearly that the mammal reference is holding a reference to an object of Cat, so for the JVM this call is Cat.speak().
Because the method call is getting resolved at runtime by the JVM, it is also known as Runtime Polymorphism and Dynamic Method Dispatch.
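A minimal sketch of the classes this Mammal/Cat example assumes:

class Mammal {
    public String speak() { return "..."; }
}

class Cat extends Mammal {
    @Override
    public String speak() { return "Meow"; }
}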
Difference Between Method Overloading and Method Overriding
For more details, you can read Everything About Method Overloading Vs Method Overriding.
Simple Definition - Method overloading deals with the notion of having two or more methods(functions) in the same class with the same name but different arguments.
While Method Overriding means having two methods with the same arguments but different implementations. One of them exists in the parent class (base class) while the other is in the derived class (child class). The @Override annotation is recommended for this.
Property                 Overloading                          Overriding
Method Names             must be same                         must be same
Argument Types           must be different (at least one)
Method Signature
Return Type
private/static/final
Access Modifier
try/catch
Method Resolution
First, I want to discuss Run-time/Dynamic polymorphism and Compile-time/static polymorphism.
Compile-time/static polymorphism: as the name suggests, it binds the function call to its appropriate function at compile time. That means the compiler knows exactly which function call is associated with which function. Function overloading is an example of compile-time polymorphism.
Run-time/dynamic polymorphism: in this type of polymorphism the compiler doesn't know which function call associates with which function until the program runs. E.g. function overriding.
Now, what are function overriding and function overloading?
Function overloading: same function name but a different function signature/parameters, e.g. an area function - the area of a square requires only one parameter (the side), while the area of a rectangle requires two parameters (length and breadth).
Function overriding: alter the behavior of a function that is present in both the superclass and the child class, e.g. name() in the superclass prints "hello Rahul", but after overriding it in the child class it prints "hello Akshit".
Tried to cover all differences
Property                 Overloading                          Overriding
Method Name              Must be same                         Must be same
Argument Types           Must be different                    Must be same
Return Type              No restriction                       Must be same until Java 1.4;
                                                              covariant return types are
                                                              allowed since Java 5
private/static/final     Can be overloaded                    Cannot be overridden
Access Modifiers         No restriction                       Cannot reduce the visibility
Throws clause            No restriction                       If the child class method throws
                                                              a checked exception, the parent
                                                              class method must declare the
                                                              same exception or a superclass
                                                              of it
Method Resolution        Taken care of by the compiler,       Taken care of by the JVM,
                         based on reference types             based on the run-time object
Known as                 Compile-time polymorphism,           Run-time polymorphism,
                         static polymorphism,                 dynamic polymorphism,
                         early binding                        late binding