So if I have
@Override
public void methodName() {
    super.methodName();
}
How will the compiler / JVM handle this? Will it be treated the same as if the override never happened, assuming the signatures are identical? I want to put this bit of code in as a clarification of intent, so that folks don't wonder why hashCode() wasn't implemented in the same class as equals().
If it makes a difference to the system though, maybe not.
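For context, here is a hypothetical sketch of the scenario being described, with made-up class and field names: hashCode() lives in a superclass, and the subclass adds a purely delegating override as documentation next to its equals():

class Entity {
    protected final long id;

    Entity(long id) { this.id = id; }

    @Override
    public int hashCode() {
        return Long.hashCode(id);
    }
}

class User extends Entity {
    User(long id) { super(id); }

    @Override
    public boolean equals(Object o) {
        return o instanceof User && ((User) o).id == id;
    }

    // Pure delegation: exists only to signal that hashCode()
    // is intentionally the one defined in Entity.
    @Override
    public int hashCode() {
        return super.hashCode();
    }
}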
Well, often the question “Can the JVM / Compiler optimize this particular method call?” is different from “Will it optimize said call?”, but your actual question is a different one.
Your real question is “Should I worry about the performance of this delegation call?” and that’s much easier to answer as it is a clear “No, don’t worry”.
First of all, regardless of whether a method invocation gets special treatment by the optimizer or not, the cost of a single invocation is negligible. It really doesn’t matter.
The reason why optimizations of invocations are discussed at all is not that the invocation itself is so expensive, but that inlining a method invocation enables follow-up optimizations, by letting the optimizer analyze the caller's code and the callee's code as a unit. Obviously, this isn't relevant to the trivial code of your overriding method. It only becomes relevant if the optimizer takes the caller's context into account, and when such an inlining operation happens, that single delegation step is indeed no match for the optimizer. The result of such an optimization will indeed be "as if the override never happened" (which applies to a lot of not-so-trivial scenarios as well).
But whether that ever happens depends on several surrounding conditions, including whether the code is a performance-relevant hot spot. If not, a call might not get optimized, but that still shouldn't bother you, because, well, it's not performance relevant then.
Related
One reason why invoking overloaded constructors through this() can be useful is that it can prevent the unnecessary duplication of code. In many cases, reducing duplicate code decreases the time it takes to load your class, because often the object code is smaller. This is especially important for programs delivered via the Internet, in which load times are an issue.
However, you need to be careful. Constructors that call this() will execute a bit slower than those that contain all of their initialization code inline. This is because the call and return mechanism used when the second constructor is invoked adds overhead. If your class will be used to create only a handful of objects, or if the constructors in the class that call this() will be seldom used, then this decrease in run-time performance is probably insignificant.
How is the time taken for loading a class made smaller?
and
What are the trade-offs between using this() in a constructor and using inline code?
That is a brilliantly typical case of premature optimization. Nobody thinks about performance when eliminating duplication; they just think about deleting several code paths that essentially do the same thing while cluttering the code base and giving opportunities for divergence between those code paths.
Conclusion: don't worry about such petty things, just write good and concise code. Duplication will hurt your system a thousand times more than a method call will hurt your performance.
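For reference, the kind of constructor chaining under discussion looks roughly like this (the class is illustrative, not from the question):

class Rectangle {
    private final int width;
    private final int height;

    Rectangle(int width, int height) {
        this.width = width;
        this.height = height;
    }

    // Delegates instead of duplicating the initialization logic.
    Rectangle(int side) {
        this(side, side);
    }

    Rectangle() {
        this(0, 0);
    }
}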
Computer resources being RAM, processing power, and disk space. I am just curious, even though it is more or less by a tiny itty-bitty amount.
It could, in theory, be a hair faster in some cases. In practice, they're equally fast.
Non-static, non-private methods are invoked using the invokevirtual bytecode op. This opcode requires the JVM to dynamically resolve the actual method: if you have a call that's statically compiled to AbstractList::contains, should that resolve to ArrayList::contains, or LinkedList::contains, etc.? What's more, the compiler can't just reuse the result of this resolution for next time; what if the next time that myList.contains(val) gets called, it's on a different implementation? So, the compiler has to do at least some amount of checking, roughly per-invocation, for non-private methods.
Private methods can't be overridden, and they're invoked using invokespecial. This opcode is used for various kinds of method calls that you can resolve just once and then never change: constructors, calls to super methods, etc. For instance, if I'm in ArrayList::add and I call super.add(value) (which doesn't happen there, but let's pretend it did), then the compiler can know for sure that this refers to AbstractList::add, since a class's superclass can't ever change.
So, in very rough terms, an invokevirtual call requires resolving the method and then invoking it, while an invokespecial call doesn't require resolving the method (after the first time it's called -- you have to resolve everything at least once!).
This is covered in the JVM spec, section 5.4.3:
Resolution of the symbolic reference of one occurrence of an invokedynamic instruction does not imply that the same symbolic reference is considered resolved for any other invokedynamic instruction.
For all other instructions above, resolution of the symbolic reference of one occurrence of an instruction does imply that the same symbolic reference is considered resolved for any other non-invokedynamic instruction.
(emphasis in original)
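If you want to see the two opcodes for yourself, here is a minimal class to run through javap -c (a sketch; the exact opcodes emitted can vary between compiler versions):

class Dispatch {
    public void publicMethod() {}
    private void privateMethod() {}

    void caller() {
        publicMethod();  // expected: invokevirtual
        privateMethod(); // expected: invokespecial (on classic javac versions)
    }
}

Compile it and run javap -c Dispatch to inspect the generated bytecode.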
Okay, now for the "but you won't notice the difference" part. The JVM is heavily optimized for virtual calls. It can do things like detecting that a certain site always sees an ArrayList specifically, and so "staticify" the List::add call to actually be ArrayList::add. To do this, it needs to verify that the incoming object really is the expected ArrayList, but that's very cheap; and if some earlier method call has already done that work in this method, it doesn't need to happen again. This is called a monomorphic call site: even though the code is technically polymorphic, in practice the list only has one form.
The JVM optimizes monomorphic call sites, and even bimorphic call sites (for instance, the list is always an ArrayList or a LinkedList, never anything else). Once it sees three forms, it has to use a full polymorphic dispatch, which is slower. But then again, at that point you're comparing apples to oranges: a non-private, polymorphic call to a private call that's monomorphic by definition. It's more fair to compare the two kinds of monomorphic calls (virtual and private), and in that case you'll probably find that the difference is minuscule, if it's even detectable.
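As an illustration, here is a call site that only ever sees one receiver type, which the JIT can profile and then guard-and-inline (a sketch; you'd need JIT logging to actually observe the decision):

import java.util.ArrayList;
import java.util.List;

class Monomorphic {
    static long sum(List<Integer> list) {
        long total = 0;
        for (int i = 0; i < list.size(); i++) {
            // This call site only ever sees ArrayList in this program,
            // so the JIT can devirtualize it behind a cheap type check.
            total += list.get(i);
        }
        return total;
    }

    public static void main(String[] args) {
        List<Integer> list = new ArrayList<>();
        for (int i = 0; i < 1_000_000; i++) list.add(i);
        System.out.println(sum(list)); // hot loop; the call site stays monomorphic
    }
}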
I just did a quick JMH benchmark to compare (a) accessing a field directly, (b) accessing it via a public getter and (c) accessing it via a private getter. All three took the same amount of time. Of course, uber-micro benchmarks are very hard to get right, because the JIT can do such wonderful things with optimizations. Then again, that's kind of the point: The JIT does such wonderful things with optimizations that public and private methods are just as fast.
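A sketch of that kind of benchmark, assuming JMH is on the classpath (the exact numbers will vary by JVM and hardware):

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

@State(Scope.Thread)
public class GetterBenchmark {
    private int value = 42;

    private int privateGetter() { return value; }
    public int publicGetter() { return value; }

    @Benchmark
    public int direct() { return value; }                     // (a) field access

    @Benchmark
    public int viaPublicGetter() { return publicGetter(); }   // (b) public getter

    @Benchmark
    public int viaPrivateGetter() { return privateGetter(); } // (c) private getter
}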
Do private functions use more or less computer resources than public ones?
No. The JVM uses the same resources regardless of the access modifier on individual fields or methods.
But there is a far better reason to prefer private (or protected) besides resource utilization; namely, encapsulation. Also, I highly recommend you read The Developer Insight Series: Part 1 - Write Dumb Code.
I am just curious, even though it is more or less by a tiny itty-bitty amount.
While it is good to be curious ... if you start taking this kind of thing into account when you are programming, then:
you are liable to waste a lot of time looking for micro-optimizations that are not needed,
your code is liable to be unmaintainable because you are sacrificing good design principles, and
you even risk making your code less efficient* than it would be if you didn't optimize.
* - It can go like this. 1) You spend a lot of time tweaking your code to run fast on your test platform. 2) When you run on the production platform, you find that the hardware gives you different performance characteristics. 3) You upgrade the Java installation, and the new JVM's JIT compiler optimizes your code differently, or it has a bunch of new optimizations that are inhibited by your tweaks. 4) When you run your code on real-world workloads, you discover that the assumptions that were the basis for your tweaking are invalid.
I found out that the C++ compiler does this, but I want to know if the Java compiler does the same, since in that answer they said adding static would do so, but static is different in Java and C++. In my case performance would matter, since I am using functions that are called only once per frame in a game loop and nowhere else; they were split out to make the code more readable.
In my code I have it setup up similar to this, except with many more calls
while (running)
{
    update();
    sync();
}
and then update() and render() would call more methods that call other methods
private final void update()
{
    switch (gameState)
    {
        case 0:
            updateMainMenu();
            renderMainMenu();
            break;
        case 1:
            updateInGame();
            renderInGame();
            break;
        //and so on
    }
}
private final void updateInGame()
{
    updatePlayerData();
    updateDayCycle();
    //and so on
}

private final void updatePlayerData()
{
    updateLocation();
    updateHealth();
    //and so on
}
So would the compiler inline these functions since they are only used once per frame in the same location?
If this is a bad question, please tell me and I will remove it.
A Java JITC will attempt to inline any functions that appear (based on runtime statistics) to be called often enough to merit it. It doesn't matter whether the function is called in only one place or dozens of places -- each calling site is analyzed separately.
Note that the decision is based on several factors. One is how big the method is: if there are a lot of potential inlining candidates, only the most profitable will be inlined, to avoid "code bloat". But the frequency of the call (multiplied by the perceived expense of the call) is the biggest "score" factor.
One thing that will discourage inlining is obvious polymorphic calls. If a call might be polymorphic it must be "guarded" by code that will execute the original call if the arriving class is not the expected one. If statistics prove that a call is frequently polymorphic (and including all the polymorphic variants is not worthwhile) then it's likely not sufficiently profitable to inline. A static or final method is the most attractive, since it requires no guard.
Another thing that can discourage inlining (and a lot of other stuff) is, oddly enough, a failure to return from the method. If you have a method that's entered and then loops 10 million times internally without returning, the JITC never gets a chance to "swap out" the interpreted method and "swap in" the compiled one. But JITCs overcome this to a degree by using techniques for compiling only part of a method, leaving the rest interpreted.
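If you're curious, HotSpot can log its inlining decisions via diagnostic flags (real flags, though the output format is unspecified and varies between JVM versions). A tiny program with a hot, inlinable call site to try them on:

// Run with: java -XX:+UnlockDiagnosticVMOptions -XX:+PrintInlining InlineDemo
public class InlineDemo {
    private static int square(int x) { return x * x; }

    public static void main(String[] args) {
        long total = 0;
        for (int i = 0; i < 10_000_000; i++) {
            total += square(i); // hot, tiny, monomorphic: a prime inlining candidate
        }
        System.out.println(total);
    }
}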
For future reference, you can view the bytecode of a .class file with javap -c MyClass to see what your compiled code looks like.
To answer your question: the Java compiler does not inline methods. The JVM, on the other hand, analyzes your code and will inline at runtime if necessary. Basically, you shouldn't worry about it -- leave it to the JVM, and it will inline if it finds it beneficial. The JVM is typically smarter than you when it comes to these things.
From http://www.oracle.com/technetwork/java/whitepaper-135217.html#method:
Method Inlining
The frequency of virtual method invocations in the Java programming language is an important optimization bottleneck. Once the Java HotSpot adaptive optimizer has gathered information during execution about program hot spots, it not only compiles the hot spot into native code, but also performs extensive method inlining on that code.
Inlining has important benefits. It dramatically reduces the dynamic frequency of method invocations, which saves the time needed to perform those method invocations. But even more importantly, inlining produces much larger blocks of code for the optimizer to work on. This creates a situation that significantly increases the effectiveness of traditional compiler optimizations, overcoming a major obstacle to increased Java programming language performance.
Inlining is synergistic with other code optimizations, because it makes them more effective. As the Java HotSpot compiler matures, the ability to operate on large, inlined blocks of code will open the door to a host of even more advanced optimizations in the future.
I'm a beginner and I've always read that it's bad to repeat code. However, it seems that in order to not do so, you would have to have extra method calls usually. Let's say I have the following class
public class BinarySearchTree<E extends Comparable<E>> {
    private BinaryTree<E> root;
    private final BinaryTree<E> EMPTY = new BinaryTree<E>();
    private int count;
    private Comparator<E> ordering;

    public BinarySearchTree(Comparator<E> order) {
        ordering = order;
        clear();
    }

    public void clear() {
        root = EMPTY;
        count = 0;
    }
}
Would it be more optimal for me to just copy and paste the two lines in my clear() method into the constructor instead of calling the actual method? If so how much of a difference does it make? What if my constructor made 10 method calls with each one simply setting an instance variable to a value? What's the best programming practice?
Would it be more optimal for me to just copy and paste the two lines in my clear() method into the constructor instead of calling the actual method?
The compiler can perform that optimization. And so can the JVM. The terminology used by compiler writer and JVM authors is "inline expansion".
If so how much of a difference does it make?
Measure it. Often, you'll find that it makes no difference. And if you believe that this is a performance hotspot, you're looking in the wrong place; that's why you'll need to measure it.
What if my constructor made 10 method calls with each one simply setting an instance variable to a value?
Again, that depends on the generated bytecode and any runtime optimizations performed by the Java Virtual Machine. If the compiler/JVM can inline the method calls, it will perform the optimization to avoid the overhead of creating new stack frames at runtime.
What's the best programming practice?
Avoiding premature optimization. The best practice is to write readable and well-designed code, and then optimize for the performance hotspots in your application.
What everyone else has said about optimization is absolutely true.
There is no reason from a performance point of view to inline the method. If it's a performance issue, the JIT in your JVM will inline it. In java, method calls are so close to free that it isn't worth thinking about it.
That being said, there's a different issue here. Namely, it is bad programming practice to call an overrideable method (i.e., one that is not final, static, or private) from the constructor. (Effective Java, 2nd Ed., p. 89 in the item titled "Design and document for inheritance or else prohibit it")
What happens if someone adds a subclass of BinarySearchTree called LoggingBinarySearchTree that overrides all public methods with code like:
public void clear() {
    this.callLog.addCall("clear");
    super.clear();
}
Then the LoggingBinarySearchTree will never be constructable! The issue is that this.callLog will be null when the BinarySearchTree constructor is running, but the clear that gets called is the overridden one, and you'll get a NullPointerException.
Note that Java and C++ differ here: in C++, a superclass constructor that calls a virtual method ends up calling the one defined in the superclass, not the overridden one. People switching between the two languages sometimes forget this.
Given that, I think it's probably cleaner in your case to inline the clear method when called from the constructor, but in general in Java you should go ahead and make all the method calls you want.
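One common compromise, sketched here against the class from the question, is to move the shared logic into a private (hence non-overridable) helper that both the constructor and clear() call:

public BinarySearchTree(Comparator<E> order) {
    ordering = order;
    init(); // private, so it cannot be overridden: safe during construction
}

public void clear() {
    init();
}

private void init() {
    root = EMPTY;
    count = 0;
}

Subclasses can still override clear(), but the constructor no longer depends on their overrides.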
I would definitely leave it as is. What if you change the clear() logic? It would be impractical to find all the places where you copied the 2 lines of code.
Generally speaking (and as a beginner this means always!) you should never make micro-optimisations like the one you're considering. Always favour readability of code over things like this.
Why? Because the compiler / HotSpot will make these sorts of optimisations for you on the fly, and many, many more. If anything, when you try to make optimisations along these sorts of lines (though not in this case) you'll probably make things slower. HotSpot understands common programming idioms; if you try to do that optimisation yourself, it probably won't understand what you're trying to do, and so it won't be able to optimise it.
There's also a much greater maintenance cost. If you start repeating code then it's going to be much more effort to maintain, which will probably be a lot more hassle than you might think!
As an aside, you may get to some points in your coding life where you do need to make low level optimisations - but if you hit those points, you'll definitely, definitely know when the time comes. And if you don't, you can always go back and optimise later if you need to.
The best practice is to measure twice and cut once.
Once you've wasted time on optimisation, you can never get it back again! (So measure it first, and ask yourself whether it's worth optimising. How much actual time will you save?)
In this case, the Java VM is probably already doing the optimization you are talking about.
The cost of a method call is the creation (and disposal) of a stack frame, plus some extra bytecode instructions if you need to pass values to the method.
The pattern that I follow is to ask whether the method in question satisfies one of the following:
Would it be helpful to have this method available outside this class?
Would it be helpful to have this method available in other methods?
Would it be frustrating to rewrite this every time I needed it?
Could the versatility of the method be increased with the use of a few parameters?
If any of the above are true, it should be wrapped up in its own method.
Keep the clear() method when it helps readability. Having unmaintainable code is more expensive.
Optimizing compilers usually do a pretty good job of removing the redundancy from these "extra" operations; in many instances, there is no difference between "optimized" code and code simply written the way you want and then run through an optimizing compiler. That is to say, the optimizing compiler usually does just as good a job as you'd do, and it does it without any degradation of the source code. In fact, many times "hand-optimized" code ends up being LESS efficient, because the compiler considers many things when doing the optimization. Leave your code in a readable format, and don't worry about optimization until a later time.
"Premature optimization is the root of
all evil." - Donald Knuth
I wouldn't worry about the method call as much as the logic of the method. If it were a critical system, and the system needed to "be fast", then I would look at optimising code that takes long to execute.
Given the memory of modern computers this is very inexpensive. It's always better to break your code up into methods so someone can quickly read what's going on. It will also help with narrowing down errors in the code if an error is restricted to a single method with a body of a few lines.
As others have said, the cost of the method call is trivial-to-nada, as the compiler will optimize it for you.
That said, there are dangers in making method calls to instance methods from a constructor. You run the risk of later updating the instance method so that it may try to use an instance variable that has not been initiated yet by the constructor. That is, you don't necessarily want to separate out the construction activities from the constructor.
Another question: your clear() method sets the root to EMPTY, which is initialized when the object is created. If you then add nodes to EMPTY and later call clear(), you won't be resetting the root node. Is this the behavior you want?
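If the sentinel is meant to stay empty, one possible fix (a sketch; it depends on what BinaryTree's no-arg constructor does) is to have clear() allocate a fresh empty tree rather than reuse the shared instance:

public void clear() {
    root = new BinaryTree<E>(); // fresh empty tree; the shared EMPTY can't be mutated by accident
    count = 0;
}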
In C++, I have to explicitly specify the virtual keyword to make a member function overridable, as there is an overhead of creating virtual tables and vpointers when a member function is made overridable (so every member function is implicitly non-overridable, for performance reasons).
It also allows a member function to be hidden (if not overridden) when a subclass provides a separate implementation with the same name and signature.
The same technique is used in C# as well. I am wondering why Java moved away from this behavior, made every method overridable by default, and provided the ability to disable overriding through explicit use of the final keyword.
The better question might be "Why does C# have non-virtual methods?" Or at the very least, why aren't they virtual by default with the option to flag them as non-virtual?
In C++, there is the idea (as Brian so nicely pointed out) that if you don't want it, you don't pay for it. The problem is that if you do want it, this usually means you end up paying through the nose for it. In most Java implementations, they are designed explicitly for lots of virtual calls; the vtable implementations tend to be fast, scarcely more expensive than non-virtual calls, meaning the primary advantage of non-virtual functions is lost. Furthermore, JIT compilers can inline virtual functions at runtime. As such, for efficiency reasons, there is very little reason actually to use non-virtual functions.
Thus, it largely comes down to the principle of least surprise. It tells us that all methods should behave the same way, not half of them being virtual and half of them being non-virtual. Since we need to have at least some virtual methods to achieve this polymorphism thing, it makes sense to have them all be virtual. Furthermore, having two methods with the same signature is just asking to shoot yourself in the foot.
Polymorphism also dictates that the object itself should have control over what it does. Its behavior should not depend on whether the client thinks it's a FooParent or a FooChild.
EDIT: So I'm being called on my assertions. This next paragraph is conjecture on my part, not a statement of fact.
An interesting side effect of all this is that Java programmers tend to use interfaces very heavily. Since the virtual method optimizations make the cost of interfaces essentially non-existent, they allow you to use a List (for example) instead of an ArrayList, and switch it out for a LinkedList at some later date with a simple one-line change and no additional penalty.
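That is, code written against the interface, where swapping implementations is a one-line change:

import java.util.ArrayList;
import java.util.List;

class Names {
    // Only the construction names the implementation;
    // switching to new LinkedList<>() touches a single line.
    private final List<String> names = new ArrayList<>();

    void add(String name) {
        names.add(name);
    }
}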
EDIT: I'll also pony up a couple sources. While not the original sources, they do come from Sun explaining some of the workings on HotSpot.
Inlining
VTable
Taken from here (#34)
There’s no virtual keyword in Java because all non-static methods always use dynamic binding. In Java, the programmer doesn’t have to decide whether to use dynamic binding. The reason virtual exists in C++ is so you can leave it off for a slight increase in efficiency when you’re tuning for performance (or, put another way, "If you don’t use it, you don’t pay for it"), which often results in confusion and unpleasant surprises. The final keyword provides some latitude for efficiency tuning – it tells the compiler that this method cannot be overridden, and thus that it may be statically bound (and made inline, thus using the equivalent of a C++ non-virtual call). These optimizations are up to the compiler.
A bit circular, perhaps.
So Java's rationale is probably something like this: the whole point of an object-oriented language is that things can be extended. So in terms of pure design, it really makes little sense to treat extensible as the "special case".
Remember that Java has the luxury of compiling at runtime. So some of the performance arguments in C++ compilation go out the window. In C++, if a class might be overridden, then the compiler has to take extra steps. In Java, there's no mystery about it: at any given moment in time, the JVM knows whether or not a particular method/class has been overridden or not, and that's essentially what counts.
Note that the final keyword is essentially about program design, not optimisation. The JVM doesn't need this information to see whether or not a class/method has been overridden!
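For completeness, this is what that design statement looks like in source (a made-up example); whether the method is ever statically bound or inlined is left entirely to the JVM:

class Account {
    private long balance;

    // Design decision: subclasses must not change how the balance is read.
    public final long balance() {
        return balance;
    }
}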
If the question is really asking which approach is better between Java and C++/C#, that has already been discussed from the opposite direction in another thread, and many resources are available on the net:
Why C# implements methods as non-virtual by default?
http://www.artima.com/intv/nonvirtual.html
The recent introduction of the @Override annotation, and its wide adoption in new code, suggest that the exact answer to the question "Why are all Java methods implicitly overridable?" is indeed: because the designers made a mistake. (And they have already fixed it.)
Oh! I'm going to get downvoted for this.
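For reference, the annotation turns a silent accidental overload into a compile error (a minimal made-up example):

class Base {
    void update() {}
}

class Sub extends Base {
    @Override
    void updte() {} // compile error: does not override anything -- the typo is caught
}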
Java tries to move closer to a more dynamic language definition, where everything is an object and every method is virtual. It also wants to avoid ambiguity and hard-to-understand constructs, which its designers viewed as a flaw in C++; therefore there is no operator overloading and, in this case, no ability to have two public method signatures in one class hierarchy invoking different methods depending on the type of the variable referencing the object.
C# is more concerned about the stability of subclasses and making sure that the subclasses behave predictably. C++ is concerned about performance.
Three different design priorities, leading to different choices.
I would say that in Java the cost of a virtual method is low compared to overall VM costs. In C++ it is a significant cost, compared against its assembly-like C background. Nobody would have decided to make all methods called through a pointer by default as a result of the C-to-C++ migration; it's too big a change.