Preface
I have been experimenting with ByteBuddy and ASM, but I am still a beginner in ASM and between beginner and advanced in ByteBuddy. This question is about ByteBuddy and about JVM bytecode limitations in general.
Situation
I had the idea of creating global mocks for testing by instrumenting constructors in such a way that instructions like these are inserted at the beginning of each constructor:
if (GlobalMockRegistry.isMock(getClass()))
    return;
FYI, the GlobalMockRegistry basically wraps a Set<Class<?>>, and if that set contains a certain class, then isMock(Class<?> clazz) returns true. The advantage of that concept is that I can (de)activate global mocking for each class at runtime: if multiple tests run in the same JVM process, one test might need a certain global mock while the next one might not.
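For illustration, a minimal sketch of such a registry; only the isMock(Class<?>) contract above is fixed, the rest (field, the activateMock/deactivateMock names, the thread-safe set) is just one possible implementation:
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public final class GlobalMockRegistry {
    // thread-safe set, since tests might toggle mocks from different threads
    private static final Set<Class<?>> MOCKED = ConcurrentHashMap.newKeySet();

    public static void activateMock(Class<?> clazz) { MOCKED.add(clazz); }
    public static void deactivateMock(Class<?> clazz) { MOCKED.remove(clazz); }
    public static boolean isMock(Class<?> clazz) { return MOCKED.contains(clazz); }
}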
What the if(...) return; instructions above want to achieve is that if mocking is active, the constructor should not do anything:
no this() or super() calls, → update: impossible
no field initialisations, → update: possible
no other side effects. → update: might be possible, see my update below
The result would be an object with uninitialised fields that did not create any (possibly expensive) side effects such as resource allocation (database connection, file creation, you name it). Why would I want that? Could I not just create an instance with Objenesis and be happy? Not if I want a global mock, i.e. mock objects I cannot inject because they are created somewhere inside methods or field initialisers I do not have control over. Please do not worry about what method calls on such an object would do if its instance fields are not properly initialised. Just assume I have instrumented the methods to return stub results, too. I know how to do that already; the problem is only the constructors in the context of this question.
Questions / problems
Now if I try to simulate the desired result in Java source code, I meet the following limitations:
I cannot insert any code before this() or super(). I could mitigate that by also instrumenting the super class hierarchy with the same if(...) return;, but I would like to know if I could in theory use ASM to insert my code before this() or super() using a method visitor. Or would the byte code of the instrumented class somehow be verified during loading or retransformation and then rejected because the byte code is "illegal"? I would like to know before I start learning ASM because I want to avoid wasting time on an idea which is not feasible.
If the class contains final instance fields, I also cannot enter a return before all of those fields have been initialised in the constructor. That might happen at the very end of a complex constructor which performs lots of side effects before actually initialising the last field. So the question is similar to the previous one: Can I use ASM to insert my if(...) return; before any fields (including final ones) are initialised and produce a valid class which I could not produce using javac and will not be rejected when loaded or retransformed?
BTW, if it is relevant, we are talking about Java 8+, i.e. at the time of writing this that would be Java versions 8 to 14.
If anything about this question is unclear, please do not hesitate to ask follow-up questions, so I can improve it.
Update after discussing Antimony's answer
I think this approach could work: it calls the constructor chain while avoiding any side effects, resulting in a newly initialised instance with all fields empty (null, 0, false):
In order to avoid calling this.getClass(), I need to hard-code the mock target's class name directly into all constructors up the parent chain. I.e. if two "global mock" target classes have the same parent class(es), multiple of the following if blocks would be woven into each corresponding parent class, one for each hard-coded child class name.
In order to avoid any side effects from objects being created or methods being called, I need to call a super constructor myself, using null/zero/false values for each argument. That would be harmless because the next parent class up the chain would have a similar code block, so the arguments given do not matter anyway.
// Avoid accessing 'this.getClass()'
if (GlobalMockRegistry.isMock(Sub.class)) {
    // Identify and call any parent class constructor, ideally a default constructor.
    // If none exists, call another one using default values like null, 0, false.
    // In the class derived from Object, just call 'Object.<init>'.
    super(null, 0, false);
    return;
}
// Here follows the original byte code, i.e. the normal super/this call and
// everything else the original constructor does.
Note to myself: Antimony's answer explains "uninitialised this" very nicely. Another related answer can be found here.
Next update after evaluating my new idea
I managed to validate my new idea with a proof of concept. As my JVM byte code knowledge is too limited and I am not used to the way of thinking it requires (stack frames, local variable tables, "reverse" logic of first pushing/popping variables, then applying an operation on them, not being able to easily debug), I just implemented it in Javassist instead of ASM, which in comparison was a breeze after failing miserably with ASM after hours of trial & error.
I can take it from here and I want to thank user Antimony for his very instructive answer + comments. I do know that theoretically the same solution could be implemented using ASM, but it would be exceedingly difficult in comparison because its API is too low level for the task at hand. ByteBuddy's API is too high level, Javassist was just right for me in order to get quick results (and easily maintainable Java code) in this case.
Yes and no. Java bytecode is much less restrictive than Java (source) in this regard. You can put any bytecode you want before the constructor call, as long as you don't actually access the uninitialized object. (The only operations allowed on an uninitialized this value are calling a constructor, setting private fields declared in the same class, and comparing it against null).
Bytecode is also more flexible in where and how you make the constructor call. For example, you can call one of two different constructors in an if statement, or you can wrap the super constructor call in a "try block", both things that are impossible at the Java language level.
Apart from not accessing the uninitialized this value, the only restriction* is that the object has to be definitely initialized along any path that returns from the constructor call. This means the only way to avoid initializing the object is to throw an exception. While being much laxer than Java itself, the rules for Java bytecode were still very deliberately constructed so it is impossible to observe uninitialized objects. In general, Java bytecode is still required to be memory safe and type safe, just with a much looser type system than Java itself. Historically, Java applets were designed to run untrusted code in the JVM, so any method of bypassing these restrictions was a security vulnerability.
* The above is talking about traditional bytecode verification, as that is what I am most familiar with. I believe stackmap verification behaves similarly though, barring implementation bugs in some versions of Java.
P.S. Technically, Java can have code execute before the constructor call. If you pass arguments to the constructor, those expressions are evaluated first, and hence the ability to place bytecode before the constructor call is required in order to compile Java code. Likewise, the ability to set private fields declared in the same class is used to set synthetic variables that arise from the compilation of nested classes.
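To illustrate that last point in plain Java (the names here are mine, purely for demonstration): the argument expression is evaluated before the constructor call, so its bytecode sits before the invocation of Base.<init> inside Sub's constructor:
class Base {
    Base(int value) { }
}

class Sub extends Base {
    Sub() {
        // compute() runs before Base.<init>; note it must be static,
        // because the uninitialized 'this' may not be touched yet
        super(compute());
    }

    private static int compute() {
        System.out.println("runs before the super constructor call");
        return 42;
    }
}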
If the class contains final instance fields, I also cannot enter a return before all of those fields have been initialised in the constructor.
This, however, is eminently possible. The only restriction is that you call some constructor or superconstructor on the uninitialized this value. (Since all constructors recursively have this restriction, this will ultimately result in java.lang.Object's constructor being called). However, the JVM doesn't care what happens after that. In particular, it only cares that the fields have some well typed value, even if it is the default value (null for objects, 0 for ints, etc.) So there is no need to execute the field initializers to give them a meaningful value.
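A small demonstration of my own in plain Java: a field read before its initialisation has run simply yields the default value, which is exactly the well-typed state the verifier requires:
class Example {
    final int x;

    Example() {
        printX(); // prints 0: 'x' still holds its default value
        x = 42;
        printX(); // prints 42
    }

    void printX() {
        System.out.println(x);
    }
}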
Is there any other way to get the type to be instantiated other than this.getClass() from a super class constructor?
Not as far as I am aware. There's no special opcode for magically getting the Class associated with a given value. Foo.class is just syntactic sugar which is handled by the Java compiler.
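As a side note, this.getClass() inside a superclass constructor already yields the runtime type of the object being constructed, which is exactly why the question above hard-codes the class instead (a tiny demonstration, names are mine):
class Base {
    Base() {
        // legal here, since the implicit Object.<init> has already run,
        // but it requires touching 'this', which the woven check must avoid
        System.out.println(getClass().getName());
    }
}

class Sub extends Base { }

// new Sub() prints "Sub", not "Base"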
In AOP in Java (AspectJ) when we talk about method pointcuts, we can differentiate them into two different sets: method call pointcuts and method execution pointcuts.
Based on these resources here on SO:
execution Vs. call Join point
Difference between call and execution in AOP
And some AspectJ background, we can tell that the differences between the two can basically be expressed as follows:
Given these classes:
class CallerObject {
    //...
    public void someMethod() {
        CompiletimeTypeObject target = new RuntimeTypeObject();
        target.someMethodOfTarget();
    }
    //...
}

class RuntimeTypeObject extends CompiletimeTypeObject {
    @Override
    public void someMethodOfTarget() {
        super.someMethodOfTarget();
        //...some other stuff
    }
}

class CompiletimeTypeObject {
    public void someMethodOfTarget() {
        //...some stuff
    }
}
A method call pointcut refers to the call of a method from a caller object which calls the method of a target object (the one which actually implements the method being called). In the example above, the caller is CallerObject, the target is RuntimeTypeObject. Also, a method call pointcut refers to the compile time type of the object, i.e. "CompiletimeTypeObject" in the example above;
So a method call pointcut like this:
pointcut methodCallPointcut():
call(void com.example.CompiletimeTypeObject.someMethodOfTarget())
Will match the target.someMethodOfTarget(); join point inside the CallerObject.someMethod() method, as the compile-time type of the target object is CompiletimeTypeObject. But this method call pointcut:
pointcut methodCallPointcut():
call(void com.example.RuntimeTypeObject.someMethodOfTarget())
Will not match, as the compile time type of the object (CompiletimeTypeObject) is not a RuntimeTypeObject or a subtype of it (it is the opposite).
A method execution pointcut refers to the execution of a method (i.e. after the method has been called or right before the method call returns). It doesn't give information about the caller and, more importantly, it refers to the runtime type of the object and not to the compile-time type.
So, both these method execution pointcuts will match the target.someMethodOfTarget(); execution join point:
pointcut methodExecutionPointcut():
    execution(void com.example.CompiletimeTypeObject.someMethodOfTarget())
pointcut methodExecutionPointcut():
    execution(void com.example.RuntimeTypeObject.someMethodOfTarget())
As the matching is based on the runtime type of the object, which is RuntimeTypeObject for both, and a RuntimeTypeObject is both a CompiletimeTypeObject (first pointcut) and a RuntimeTypeObject (second pointcut).
Now, as PHP doesn't provide compile time types for objects (unless type-hinting is used to somehow emulate this behaviour), does it make sense to differentiate method call and method execution pointcuts in a PHP AOP implementation? How then will the pointcuts differ from each other?
Thanks for the attention!
EDIT: @kriegaex has pointed out another interesting aspect of the difference between call and method execution pointcuts in AspectJ.
Thank you for the great and concise example. I have tried to make an example myself too and here is what I understood:
In case A (I use a 3rd party library), I actually can't intercept the execution of a library method because the library itself was already compiled into bytecode and any aspect concerning that library was already woven into that bytecode too (I would need to weave the sources in order to do so).
So I can only intercept the method calls to the library methods, but again I can only intercept the calls to library methods in my code and not the calls to library methods from within the library itself because of the same principle (the calls to library methods from within the library itself are also already compiled).
The same applies for System classes (same principle) as is said here (even if the reference refers to JBoss):
https://docs.jboss.org/jbossaop/docs/2.0.0.GA/docs/aspect-framework/reference/en/html/pointcuts.html
System classes cannot be used within execution expressions because it
is impossible to instrument them.
In case B (I provide a library for other users), if I actually need to intercept the usage of a method of my library, either in the library itself or in the future user code which will use that method, then I need to use an execution pointcut. The aspect weaver will compile both the method execution and call pointcuts that concern my library, but not the user code which will use my library methods (simply because the user code doesn't exist yet when I am writing the library). Using an execution pointcut therefore ensures that the weaving will occur inside the method execution (for a clear and intuitive example, look at the @kriegaex pseudo-code below) and not wherever the method is called within my library (i.e. at the caller side).
So I can intercept the usage (more precisely, execution) of my library method both when the method is used within my library and in the user's code.
If I had used a method call pointcut in this case, I would have intercepted only the calls made from within my library, and not the calls made in the user's code.
Anyway, do you still think these considerations make sense and can be applied in the PHP world? What do you think, guys?
Disclaimer: I do not speak PHP, not even a little. So my answer is rather general in nature than specific to PHP.
AFAIK, PHP is an interpreted rather than a compiled language. So the difference is not compile time vs. runtime type, but semantically rather declared vs. actual type. I imagine that a PHP-based AOP framework would not "compile" anything but rather preprocess source code, injecting extra (aspect) source code into the original files. Probably it would still be possible to differentiate declared from actual types somehow.
But there is another important factor which is also relevant to the difference between call vs execution joinpoints: the place in which the code is woven. Imagine situations in which you use libraries or provide them yourself. The question for each given situation is which parts of the source code are under the user's control when applying aspect weaving.
Case A: You use a 3rd party library: Let us assume you cannot (or do not want to) weave aspects into the library. Then you cannot use execution for intercepting library methods, but still use call pointcuts because the calling code is under your control.
Case B: You provide a library to other users: Let us assume your library should use aspects, but the library's user does not know anything about it. Then execution pointcuts will always work because the advices are already woven into your library's methods, no matter if they are called from outside or from the library itself. But call would only work for internal calls because no aspect code was woven into the user's calling code.
Only if you control the calling as well as the called (executed) code does it not make much difference whether you use call or execution. But wait a minute, it still makes a difference: execution is just woven in one place, while call is woven into potentially many places, so the amount of code generated is smaller for execution.
Update:
Here is some pseudo code, as requested:
Let us assume we have a class MyClass which is to be aspect-enhanced (via source code insertion):
class MyClass {
    method foo() {
        print("foo");
        bar();
    }
    method bar() {
        print("bar");
        zot();
    }
    method zot() {
        print("zot");
    }
    static method main() {
        new MyClass().foo();
    }
}
Now if we apply a CallAspect like this using call()
aspect CallAspect {
    before() : call(* *(..)) {
        print("before " + thisJoinPoint);
    }
}
}
upon our code, it would look like this after source code weaving:
class MyClass {
    method foo() {
        print("foo");
        print("before call(MyClass.bar())");
        bar();
    }
    method bar() {
        print("bar");
        print("before call(MyClass.zot())");
        zot();
    }
    method zot() {
        print("zot");
    }
    static method main() {
        print("before call(MyClass.foo())");
        new MyClass().foo();
    }
}
Alternatively, if we apply an ExecutionAspect like this using execution()
aspect ExecutionAspect {
    before() : execution(* *(..)) {
        print("before " + thisJoinPoint);
    }
}
upon our code, it would look like this after source code weaving:
class MyClass {
    method foo() {
        print("before execution(MyClass.foo())");
        print("foo");
        bar();
    }
    method bar() {
        print("before execution(MyClass.bar())");
        print("bar");
        zot();
    }
    method zot() {
        print("before execution(MyClass.zot())");
        print("zot");
    }
    static method main() {
        print("before execution(MyClass.main())");
        new MyClass().foo();
    }
}
Can you see the difference now? Pay attention to where the code is woven into and what the print statements say.
PHP is a dynamic language, so it's quite hard to implement call joinpoints because of language features like call_user_func_array() and variable functions, e.g. $func = 'var_dump'; $func($func);
@kriegaex wrote a good answer with the main differences between the call and execution types of joinpoints. Applied to PHP, the only possible joinpoint for now is the execution joinpoint, because it's much easier to hook the execution of a method or function by wrapping a class with a decorator or by providing a PHP extension for that.
Actually, the Go! AOP framework provides only execution joinpoints, as do the FLOW3 framework and others.
I need to access a private method from another class. I have two ways of accessing it. The first is the obvious reflection. The second is sort of a hack. The private method I need to call is being called from a protected inner class's accessPrivateMethod method. This method will literally only call the private method I need. So, is it better to access it using reflection or is it better to sort of "hack" it by extending the protected inner class that calls it. See code:
method = object.getClass().getDeclaredMethod("privateMethod");
method.setAccessible(true);
Object r = method.invoke(object);
Or:
(ProtectedInnerClass is a protected inner class in the class whose private method I want to access.)
class Hack extends ProtectedInnerClass {
    public void accessPrivateMethod() {
        // callPrivateMethod literally only calls the private method
        // I need to call.
        super.callPrivateMethod();
    }
}
...
Hack.accessPrivateMethod();
Some additional thoughts:
1) I've seen many people on here say to use reflection only as a last resort.
2) Reflection could cause Security issues? (SecurityManager can deny the setAccessible sometimes?) This needs to work all the time on any machine/setup.
If my hack isn't clear please say so and I will try to elaborate more. Thanks!
PS: the private method I need to access is in the JUNG libraries. Calling it fixes a bug. AKA I'm trying to find a workaround without having to edit any of the JUNG jars.
1) I've seen many people on here say to use reflection only as a last resort.
Assuming your hack actually works, it is better to use that, rather than using reflection. This is because using reflection is way more expensive.
Here's an extract from Java's API documentation concerning reflection:
Because reflection involves types that are dynamically resolved, certain Java virtual machine optimizations can not be performed. Consequently, reflective operations have slower performance than their non-reflective counterparts, and should be avoided in sections of code which are called frequently in performance-sensitive applications.
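If you do end up using reflection, its cost can at least be reduced by performing the lookup once and caching the Method object; here is a minimal sketch, assuming a hypothetical TargetClass containing the private method:
import java.lang.reflect.Method;

class TargetClass {
    private String privateMethod() { return "secret"; }
}

class PrivateMethodAccessor {
    // resolve and unlock the method once; invoke() itself is comparatively cheap
    private static final Method PRIVATE_METHOD;

    static {
        try {
            PRIVATE_METHOD = TargetClass.class.getDeclaredMethod("privateMethod");
            PRIVATE_METHOD.setAccessible(true);
        } catch (NoSuchMethodException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    static Object callPrivateMethod(TargetClass target) throws Exception {
        return PRIVATE_METHOD.invoke(target);
    }
}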
2) Reflection could cause Security issues? (SecurityManager can deny the setAccessible sometimes?) This needs to work all the time on any machine/setup.
Likewise:
Reflection requires a runtime permission which may not be present when running under a security manager. This is an important consideration for code which has to run in a restricted security context, such as in an Applet.
So not only may the setAccessible call be denied, but reflection usage overall.
Another consideration is that, in order to call your Hack class method without the caller creating an instance, you need to make the method static (and since a static method cannot use super, it has to create its instance internally):
class Hack extends ProtectedInnerClass {
    public static void accessPrivateMethod() {
        // a static method cannot call 'super.callPrivateMethod()',
        // so create a throwaway instance internally; callers still
        // do not need an instance of their own
        new Hack().callPrivateMethod();
    }
}
Hack.accessPrivateMethod();
The fact that this question arises is probably caused by bad design or by the fact that Java doesn't allow "sub-package" visibility. However, if performance is a concern, go for the "little" hack. Otherwise, choose the aesthetic solution using reflection. But in the first place, try to find out if your design is good.
The simple answer is of course to include a start method on the Service interface.
interface Service {
    void start();
    OperationResult operation(parameters);
    ...
}
This of course sucks because most users of the Service don't want or care about starting the service; they just want to use methods like operation. How would you solve this problem? I have a simple solution which does not pollute the Service interface but has one major limitation, so I would like to hear people's proposals.
If it's necessary to start a service, then do stuff with it, and then perhaps terminate it, there should be an object whose function is simply to start the service and supply an object which may then be used to do stuff with the service, including terminating it (the latter action, probably best handled via IDisposable, should invalidate the "do-stuff" object).
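In Java terms, that proposal might look roughly like the following sketch; all names are my own, the placeholder types stand in for whatever the real service uses, and AutoCloseable plays the role of IDisposable:
interface OperationResult { }  // placeholder for the real result type
interface Parameters { }       // placeholder for the real parameter type

interface ServiceStarter {
    // the starter's only job: start the service and hand out a handle
    RunningService start();
}

interface RunningService extends AutoCloseable {
    OperationResult operation(Parameters parameters);

    @Override
    void close(); // terminates the service and invalidates this handle
}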
A few ways:
start it whenever it gets constructed
throw an exception from operation methods if the service isn't started, indicating improper usage (see the sketch after this list)
automatically start it from every method, if it isn't started
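The second option might look like this minimal sketch; GuardedService is a hypothetical name, and OperationResult/Parameters stand for whatever the real Service interface uses:
class GuardedService implements Service {
    private boolean started;

    public void start() {
        started = true;
        // ... real startup work ...
    }

    public OperationResult operation(Parameters parameters) {
        if (!started) {
            throw new IllegalStateException("service not started, call start() first");
        }
        // ... the actual work ...
        return null; // placeholder result
    }
}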
Your question arises because you are mixing implementation issues (the start() method, e.g.) with the actions (the operation() method). Start is an implementation issue because you could create an instance for each caller or have a singleton (as in cached instances). The caller should not have to call the method at all. In fact, if you keep the start method and change your implementation to a singleton tomorrow, the code may stop working for the existing clients.
IMO, you should get rid of the start method from this interface and let the caller worry about delegating the task that your operation method does best.
If you push the start method (an optimization step that has nothing to do with the interface) to your implementation, you can solve the problem in any number of ways. For example,
a. Call start() in the operation() method if it has not been called before. You will need to deal with synchronization issues (see the sketch after this list).
b. call start() in the constructor of your implementation object and be done with it.
etc.
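Option (a) could be sketched like this; the class and member names are assumed, the point is only the synchronized lazy start:
class LazyService implements Service {
    private boolean started;

    // synchronized so that two concurrent callers cannot both run start()
    private synchronized void ensureStarted() {
        if (!started) {
            start();
            started = true;
        }
    }

    public void start() {
        // ... expensive startup work, e.g. opening connections ...
    }

    public OperationResult operation(Parameters parameters) {
        ensureStarted();
        // ... the actual work ...
        return null; // placeholder result
    }
}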
If you don't want the consumer to care about calling a startup method, you can consider delegating your startup code internally to a protected method that lazily initializes what you are currently doing on startup. Such as:
protected MyService getMyService() {
    if (myService == null) {
        myService = new MyServiceImpl();
        myService.startup();
    }
    return myService;
}
Call methods like this:
public String findByThis(String tag, String key) {
    return getMyService().findThat(MyClass.class, tag, key);
}
This of course has some tradeoffs. If your service is expensive to start, the first caller will take that hit.
Another option is to implement this using a static {} block, but that is often not very testable either, and doing your startup routines on object construction sometimes violates IoC patterns as well. I went with adding a Service interface because the clients I had were internal, and I wanted all services initialized and ready on startup.
What are the best practices for using Java's @Override annotation and why?
It seems like it would be overkill to mark every single overridden method with the @Override annotation. Are there certain programming situations that call for using @Override and others that should never use @Override?
Use it every time you override a method for two benefits. Do it so that you can take advantage of the compiler checking to make sure you actually are overriding a method when you think you are. This way, if you make the common mistake of misspelling a method name or not correctly matching the parameters, you will be warned that your method does not actually override as you think it does. Secondly, it makes your code easier to understand because it is more obvious when methods are overridden.
Additionally, in Java 1.6 you can use it to mark when a method implements an interface, for the same benefits. I think it would be better to have a separate annotation (like @Implements), but it's better than nothing.
I think it is most useful as a compile-time reminder that the intention of the method is to override a parent method. As an example:
protected boolean displaySensitiveInformation() {
    return false;
}
You will often see something like the above method that overrides a method in the base class. This is an important implementation detail of this class -- we don't want sensitive information to be displayed.
Suppose this method is changed in the parent class to
protected boolean displaySensitiveInformation(Context context) {
    return true;
}
This change will not cause any compile time errors or warnings - but it completely changes the intended behavior of the subclass.
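Had the method been annotated, the compiler would have flagged the stale override immediately:
@Override
protected boolean displaySensitiveInformation() {
    // compile error: method does not override or implement a method
    // from a supertype, because the parent now expects a Context parameter
    return false;
}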
To answer your question: you should use the @Override annotation if the lack of a method with the same signature in a superclass is indicative of a bug.
There are many good answers here, so let me offer another way to look at it...
There is no overkill when you are coding. It doesn't cost you anything to type @Override, but the savings can be immense if you misspelled a method name or got the signature slightly wrong.
Think about it this way: in the time you navigated here and typed this post, you pretty much used more time than you will spend typing @Override for the rest of your life; but one error it prevents can save you hours.
Java does all it can to make sure you didn't make any mistakes at edit/compile time; this is a virtually free way to solve an entire class of mistakes that aren't preventable in any other way outside of comprehensive testing.
Could you come up with a better mechanism in Java to ensure that when the user intended to override a method, he actually did?
Another neat effect is that if you don't provide the annotation it will warn you at compile time that you accidentally overrode a parent method--something that could be significant if you didn't intend to do it.
I always use the tag. It is a simple compile-time flag to catch little mistakes that I might make.
It will catch things like tostring() instead of toString()
The little things help in large projects.
Using the @Override annotation acts as a compile-time safeguard against a common programming mistake. It will produce a compilation error if you put the annotation on a method that does not actually override a superclass method.
The most common case where this is useful is when you are changing a method in the base class to have a different parameter list. A method in a subclass that used to override the superclass method will no longer do so due to the changed method signature. This can sometimes cause strange and unexpected behavior, especially when dealing with complex inheritance structures. The @Override annotation safeguards against this.
To take advantage of compiler checking you should always use the @Override annotation. But don't forget that the Java 1.5 compiler will not allow this annotation when overriding interface methods; you can only use it when overriding class methods (abstract or not).
Some IDEs, such as Eclipse, even when configured with a Java 1.6 runtime or higher, maintain compliance with Java 1.5 and don't allow the use of @Override as described above. To avoid that behaviour you must go to: Project Properties -> Java Compiler -> check "Enable Project Specific Settings" -> choose "Compiler Compliance Level" = 6.0, or higher.
I like to use this annotation every time I override a method, independently of whether the base is an interface or a class.
This helps you avoid some typical errors, as when you think you are overriding an event handler and then see nothing happening. Imagine you want to add an event listener to some UI component:
someUIComponent.addMouseListener(new MouseAdapter(){
    public void mouseEntered() {
        ...do something...
    }
});
The above code compiles and runs, but if you move the mouse inside someUIComponent the "do something" code will not run, because actually you are not overriding the base method mouseEntered(MouseEvent ev). You just create a new parameter-less method mouseEntered(). If you had used the @Override annotation instead, you would have seen a compile error and would not have wasted time wondering why your event handler was not running.
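The corrected listener, with the annotation and the proper signature, would be:
someUIComponent.addMouseListener(new MouseAdapter() {
    @Override
    public void mouseEntered(MouseEvent ev) { // correct signature, actually overrides
        // ...do something...
    }
});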
@Override on interface implementation is inconsistent, since there is no such thing as "overriding an interface" in Java.
@Override on interface implementation is useless, since in practice it catches no bugs that the compilation wouldn't catch anyway.
There is only one, far-fetched scenario where override on implementers actually does something: if you implement an interface, and the interface REMOVES methods, you will be notified at compile time that you should remove the unused implementations. Notice that if the new version of the interface has NEW or CHANGED methods, you'll obviously get a compile error anyway as you're not implementing the new stuff.
@Override on interface implementers should never have been permitted in 1.6, and with Eclipse sadly choosing to auto-insert the annotations as default behavior, we get a lot of cluttered source files. When reading 1.6 code, you cannot see from the @Override annotation whether a method actually overrides a method in the superclass or just implements an interface.
Using @Override when actually overriding a method in a superclass is fine.
It's best to use it for every method intended as an override and, in Java 6+, for every method intended as an implementation of an interface.
First, it catches misspellings like "hashcode()" instead of "hashCode()" at compile-time. It can be baffling to debug why the result of your method doesn't seem to match your code when the real cause is that your code is never invoked.
Also, if a superclass changes a method signature, overrides of the older signature can be "orphaned", left behind as confusing dead code. The #Override annotation will help you identify these orphans so that they can be modified to match the new signature.
If you find yourself overriding (non-abstract) methods very often, you probably want to take a look at your design. The annotation is very useful when the compiler would not otherwise catch the error, for instance when trying to override initValue() in ThreadLocal, which I have done.
Using @Override when implementing interface methods (a 1.6+ feature) seems a bit overkill to me. If you have loads of methods, some of which override and some of which don't, that is probably bad design again (and your editor will probably show which is which if you don't know).
@Override on interface methods is actually helpful, because you will get warnings if you change the interface.
Another thing it does is make it more obvious when reading the code that it is changing the behavior of the parent class. That can help in debugging.
Also, in Joshua Bloch's book Effective Java (2nd edition), item 36 gives more details on the benefits of the annotation.
It makes absolutely no sense to use @Override when implementing an interface method. There's no advantage to using it in that case: the compiler will already catch your mistake, so it's just unnecessary clutter.
Whenever a method overrides another method, or a method implements a signature in an interface.
The @Override annotation assures you that you did in fact override something. Without the annotation you risk a misspelling or a difference in parameter types and number.
I use it every time. It's more information that I can use to quickly figure out what is going on when I revisit the code in a year and I've forgotten what I was thinking the first time.
The best practice is to always use it (or have the IDE fill it in for you).
@Override's usefulness is to detect changes in parent classes which have not been propagated down the hierarchy.
Without it, you can change a method signature and forget to alter its overrides; with @Override, the compiler will catch it for you.
That kind of safety net is always good to have.
I use it everywhere.
On the topic of the effort of marking methods, I let Eclipse do it for me, so it's no additional effort.
I'm religious about continuous refactoring, so I'll use every little thing that makes it go more smoothly.
Used only on method declarations.
Indicates that the annotated method declaration overrides a declaration in a supertype.
If used consistently, it protects you from a large class of nefarious bugs.
Use the @Override annotation to avoid these bugs:
(Spot the bug in the following code:)
import java.util.HashSet;
import java.util.Set;

public class Bigram {
    private final char first;
    private final char second;

    public Bigram(char first, char second) {
        this.first = first;
        this.second = second;
    }

    public boolean equals(Bigram b) {
        return b.first == first && b.second == second;
    }

    public int hashCode() {
        return 31 * first + second;
    }

    public static void main(String[] args) {
        Set<Bigram> s = new HashSet<Bigram>();
        for (int i = 0; i < 10; i++)
            for (char ch = 'a'; ch <= 'z'; ch++)
                s.add(new Bigram(ch, ch));
        System.out.println(s.size());
    }
}
source: Effective Java
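For the record: the bug is that equals(Bigram) overloads rather than overrides equals(Object), so HashSet never calls it and the program prints 260 instead of the expected 26. Annotating the method with @Override makes the compiler reject the wrong signature; the corrected version (also from Effective Java) is:
@Override
public boolean equals(Object o) {
    if (!(o instanceof Bigram))
        return false;
    Bigram b = (Bigram) o;
    return b.first == first && b.second == second;
}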
Be careful when you use @Override, because you can't reverse-engineer the code in StarUML afterwards; make the UML first.
It seems that the wisdom here is changing. Today I installed IntelliJ IDEA 9 and noticed that its "missing @Override inspection" now catches not just implemented abstract methods, but implemented interface methods as well. In my employer's code base and in my own projects, I've long had the habit of only using @Override for the former, i.e. implemented abstract methods. However, rethinking the habit, the merit of using the annotation in both cases becomes clear. Despite being more verbose, it does protect against the fragile base class problem (not as grave as C++-related examples) where the interface method name changes, orphaning the would-be implementing method in a derived class.
Of course, this scenario is mostly hyperbole; the derived class would no longer compile, now lacking an implementation of the renamed interface method, and today one would likely use a Rename Method refactoring operation to address the entire code base en masse.
Given that IDEA's inspection is not configurable to ignore implemented interface methods, today I'll change both my habit and my team's code review criteria.
The @Override annotation helps check whether the developer overrides the correct method in the parent class or interface. When the names of the super's methods change, the compiler can notify you of that, which keeps the subclass consistent with the super.
BTW, if we don't add the @Override annotation in the subclass but do override a method of the super, the method works just like one with @Override. But it cannot notify the developer when the super's method is changed, because it does not know the developer's purpose: override the super's method or define a new method?
So when we want to override a method to make use of polymorphism, we had better add @Override above the method.
I use it as much as I can to identify when a method is being overridden. If you look at the Scala programming language, they also have an override keyword. I find it useful.
It does allow you (well, the compiler) to catch when you've used the wrong spelling on a method name you are overriding.
The @Override annotation is used to take advantage of the compiler's checking whether you actually are overriding a method from the parent class. It notifies you if you make any mistake, like misspelling a method name or not correctly matching the parameters.
I think it's best to code the @Override whenever allowed; it helps for coding. However, to be noted: in Eclipse Helios, with either SDK 5 or 6, the @Override annotation for implemented interface methods is allowed; in Galileo, with either 5 or 6, the @Override annotation is not allowed.
Annotations provide metadata about the code to the compiler, and the @Override annotation is used in the case of inheritance when we are overriding a method of the base class. It just tells the compiler that you are overriding a method. It can avoid some common kinds of mistakes, like not following the proper signature of the method or misspelling the name of the method. So it's good practice to use the @Override annotation.
For me, @Override ensures I have the signature of the method correct. If I put in the annotation and the method is not correctly spelled, then the compiler complains, letting me know something is wrong.
Simple: when you want to override a method present in your superclass, use the @Override annotation to make a correct override. The compiler will warn you if you don't override it correctly.