As I understand it, there is no clear rule to determine whether a Java method will be JIT-compiled or interpreted when it is called.
So is there some way to tell the JVM that I need a certain method to be JITed? And is there a way to know for sure which methods will be JITed and which will not?
As far as I know you can't tell (from inside the JVM) and cannot enforce a method being JITed or not, but using the -XX:+PrintCompilation JVM argument you can watch the JIT compiler doing its work and check whether a method gets JITed in that particular run of the program.
So is there some way to tell the JVM that I need a certain method to be JITed?
No. Which methods get JITed is not up to you, and in fact there is no guarantee that any method will ever be JITed. I suggest you leave these decisions to the JVM.
And is there a way to know for sure which method will be JITed and which not?
The (Oracle) Sun JVM is called HotSpot because it looks at which methods are called most, thus becoming "hot", and those methods are the first to be compiled. So some methods may never be compiled. But if you know a method is called a lot, it will most probably be compiled. You can set the threshold with the -XX:CompileThreshold=10000 VM option, which specifies how many invocations it takes before a method is considered "hot".
I don't know of any way to check whether the current code is running in interpreted or compiled mode. VM crash logs show which methods in the stack trace are interpreted and which are compiled, so maybe there's some way to get at that at runtime.
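As a minimal sketch (the class and method names here are made up for illustration), you can watch the compiler at work by running a small hot loop under -XX:+PrintCompilation:

```java
public class JitWatch {
    // Run with: java -XX:+PrintCompilation JitWatch
    // Once sum() crosses the invocation threshold, it should appear
    // in the compilation log (log format varies by JVM build).
    static int sum(int n) {
        int s = 0;
        for (int i = 0; i < n; i++) s += i;
        return s;
    }

    public static void main(String[] args) {
        long total = 0;
        for (int i = 0; i < 20_000; i++) {
            total += sum(100); // call often enough to become "hot"
        }
        System.out.println(total); // prints 99000000
    }
}
```

There is no guarantee the method gets compiled on any particular run; the log only tells you what happened that time.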
There is a way to ask the JVM to compile a class. It is not guaranteed to do anything, but should work on any JVM that has a JIT (note that java.lang.Compiler was deprecated in Java 9 and removed in recent JDKs, so this only applies to older JVMs):
Compiler.compileClass(MyClass.class);
You can't tell, and it makes no difference barring bugs in HotSpot. The term 'JIT' is at least ten years out of date.
Related
I have been pulled into a performance investigation of code similar to the below:
private void someMethod(String id) {
    boolean isHidden = someList.contains(id);
    boolean isDisabled = this.checkIfDisabled(id);

    if (isHidden && isDisabled) {
        // Do something here
    }
}
When I started investigating it, I was hoping that the compiled version would look like this:
private void someMethod(String id) {
    if (someList.contains(id) && this.checkIfDisabled(id)) {
        // Do something here
    }
}
However, to my surprise, the compiled version looks exactly like the first one, with the local variables, which causes the method behind isDisabled to always be called, and that's where the performance problem is.
My solution was to inline it myself, so the method now short circuits at isHidden, but it left me wondering: Why isn't the Java Compiler smart enough in this case to inline these calls for me? Does it really need to have the local variables in place?
Thank you :)
First: the Java compiler (javac) does almost no optimizations; that job is done almost entirely by the JVM itself at runtime.
Second: optimizations like that can only be done when there is no observable difference in behaviour of the optimized code vs. the un-optimized code.
Since we don't know (and the compiler presumably also doesn't know) if checkIfDisabled has any observable side-effects, it has to assume that it might. Therefore even when the return value of that method is known to not be needed, the call to the method can't be optimized away.
There is, however, an option for this kind of optimization to be done at runtime: if the body (or bodies, due to polymorphism) of the checkIfDisabled method is simple enough, then it's quite possible that the runtime can actually optimize away that code, if it recognizes that the calls never have a side effect (though I don't know if any JVM actually does this specific kind of optimization).
But that optimization is only possible at a point where there is definite information about what checkIfDisabled does. And due to the dynamic class-loading nature of Java that basically means it's almost never during compile time.
Generally speaking, while some minor optimizations could possibly be done during compile time, the range of possible optimizations is much larger at runtime (due to the much increased amount of information about the code available), so the Java designers decided to put basically all optimization effort into the runtime part of the system.
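To illustrate the side-effect point (a hypothetical sketch, not the original code): javac must preserve a call whose result is unused whenever the callee might do something observable.

```java
// The counter makes the side effect visible: even though the boolean
// result of check() is never used, the call cannot be removed at
// compile time without changing observable behavior.
class SideEffects {
    static int counter = 0;

    static boolean check() {
        counter++;          // observable side effect
        return false;
    }

    public static void main(String[] args) {
        boolean unused = check(); // javac must keep this call
        System.out.println(counter); // prints 1, proving the call ran
    }
}
```

Only a runtime that has proven check() free of side effects could legally drop the call.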
The most-obvious solution to this problem is simply to rewrite the code something like this:
if (someList.contains(id)) {
    if (this.checkIfDisabled(id)) {
        // do something here
    }
}
If, in your human estimation of the problem, one test is likely to mean that the other test does not need to be performed at all, then simply "write it that way."
Java compiler optimizations are tricky. Most optimizations are done at runtime by the JIT compiler. There are several levels of optimization; by default, the maximum level of optimization is applied after about 5000 method invocations. But it is rather problematic to see which optimizations are applied, since the JIT compiles the code directly into the platform's native code.
Computer resources being RAM, processing power, and disk space. I am just curious, even though the difference is more or less a tiny itty-bitty amount.
It could, in theory, be a hair faster in some cases. In practice, they're equally fast.
Non-static, non-private methods are invoked using the invokevirtual bytecode op. This opcode requires the JVM to dynamically look up the actual method to invoke: if you have a call that's statically compiled to AbstractList::contains, should that resolve to ArrayList::contains, or LinkedList::contains, etc.? What's more, the compiler can't just reuse the result of this resolution for next time; what if the next time that myList.contains(val) gets called, it's on a different implementation? So, the compiler has to do at least some amount of checking, roughly per invocation, for non-private methods.
Private methods can't be overridden, and they're invoked using invokespecial. This opcode is used for various kinds of method calls that you can resolve just once and then never change: constructors, calls to super methods, etc. For instance, if I'm in ArrayList::add and I call super.add(value) (which doesn't happen there, but let's pretend it did), then the compiler can know for sure that this refers to AbstractList::add, since a class's superclass can't ever change.
So, in very rough terms, an invokevirtual call requires resolving the method and then invoking it, while an invokespecial call doesn't require resolving the method (after the first time it's called -- you have to resolve everything at least once!).
This is covered in the JVM spec, section 5.4.3:
Resolution of the symbolic reference of one occurrence of an invokedynamic instruction does not imply that the same symbolic reference is considered resolved for any other invokedynamic instruction.
For all other instructions above, resolution of the symbolic reference of one occurrence of an instruction does imply that the same symbolic reference is considered resolved for any other non-invokedynamic instruction.
(emphasis in original)
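You can inspect the two opcodes yourself with a small sketch (class and method names are made up): compile it and run javap -c Calls to see what javac emits for each call. The exact opcode chosen for the private call can vary between JDK versions (e.g. after the nestmate changes in newer JDKs), so treat the comments as typical rather than guaranteed.

```java
class Calls {
    public int pub() { return 1; }     // called via invokevirtual
    private int priv() { return 2; }   // called via invokespecial on classic javac

    int caller() {
        return pub() + priv();         // compare the two call instructions in javap -c
    }

    public static void main(String[] args) {
        System.out.println(new Calls().caller()); // prints 3
    }
}
```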
Okay, now for the "but you won't notice the difference" part. The JVM is heavily optimized for virtual calls. It can do things like detecting that a certain site always sees an ArrayList specifically, and so "staticify" the List::add call to actually be ArrayList::add. To do this, it needs to verify that the incoming object really is the expected ArrayList, but that's very cheap; and if some earlier method call has already done that work in this method, it doesn't need to happen again. This is called a monomorphic call site: even though the code is technically polymorphic, in practice the list only has one form.
The JVM optimizes monomorphic call sites, and even bimorphic call sites (for instance, the list is always an ArrayList or a LinkedList, never anything else). Once it sees three forms, it has to use a full polymorphic dispatch, which is slower. But then again, at that point you're comparing apples to oranges: a non-private, polymorphic call to a private call that's monomorphic by definition. It's more fair to compare the two kinds of monomorphic calls (virtual and private), and in that case you'll probably find that the difference is minuscule, if it's even detectable.
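As an illustrative sketch (class names are made up; the JIT's actual profiling is not observable from this code): the call site below sees exactly two receiver types, so the JIT can use a bimorphic inline cache there; feeding it a third List implementation would force full virtual dispatch.

```java
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

class CallSites {
    // This is the call site being profiled: list.contains(v) sees
    // only ArrayList and LinkedList receivers in this program.
    static boolean contains(List<Integer> list, int v) {
        return list.contains(v);
    }

    public static void main(String[] args) {
        List<Integer> a = new ArrayList<>(List.of(1, 2, 3));
        List<Integer> b = new LinkedList<>(List.of(4, 5, 6));
        boolean found = false;
        for (int i = 0; i < 10_000; i++) {
            found = contains(a, 2) && contains(b, 5); // two forms only
        }
        System.out.println(found); // prints true
    }
}
```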
I just did a quick JMH benchmark to compare (a) accessing a field directly, (b) accessing it via a public getter and (c) accessing it via a private getter. All three took the same amount of time. Of course, uber-micro benchmarks are very hard to get right, because the JIT can do such wonderful things with optimizations. Then again, that's kind of the point: The JIT does such wonderful things with optimizations that public and private methods are just as fast.
Do private functions use more or less computer resources than public ones?
No. The JVM uses the same resources regardless of the access modifier on individual fields or methods.
But there is a far better reason to prefer private (or protected) besides resource utilization: namely, encapsulation. Also, I highly recommend you read The Developer Insight Series: Part 1 - Write Dumb Code.
I am just curious, even though it is more or less by a tiny itty-bitty amount.
While it is good to be curious ... if you start taking this kind of thing into account when you are programming, then:
you are liable to waste a lot of time looking for micro-optimizations that are not needed,
your code is liable to be unmaintainable because you are sacrificing good design principles, and
you even risk making your code less efficient* than it would be if you didn't optimize.
* - It can go like this: 1) You spend a lot of time tweaking your code to run fast on your test platform. 2) When you run on the production platform, you find that the hardware gives you different performance characteristics. 3) You upgrade the Java installation, and the new JVM's JIT compiler optimizes your code differently, or it has a bunch of new optimizations that are inhibited by your tweaks. 4) When you run your code on real-world workloads, you discover that the assumptions that were the basis for your tweaking are invalid.
So if I have
public void methodName() {
    super.methodName();
}
How will the Compiler / JVM handle this? Will it be treated the same as if the override never happened assuming the signatures are identical? I want to put this bit of code in as a clarification of intent so that folks don't wonder why hashCode() wasn't implemented in the same class as equals()
If it makes a difference to the system though, maybe not.
Well, often the question “Can the JVM / Compiler optimize this particular method call?” is different from “Will it optimize said call?”, but your actual question is a different one.
Your real question is “Should I worry about the performance of this delegation call?” and that’s much easier to answer as it is a clear “No, don’t worry”.
First of all, regardless of whether a method invocation gets special treatment by the optimizer or not, the cost of a single invocation is negligible. It really doesn’t matter.
The reason, why optimizations of invocations are ever discussed, is not that the invocation itself is so expensive, but that inlining a method invocation enables follow-up optimizations by analyzing the caller’s code and the callee’s code as a unit. Obviously, this isn’t relevant to the trivial code of your overriding method. It only becomes relevant if the optimizer is going to take the caller’s context into account and if such an inlining operation happens, that single delegation step is indeed no match to the optimizer. The result of such an optimization will indeed be “as if the override never happened” (which applies to a lot of not so trivial scenarios as well).
But if that ever happens, depends on several surrounding conditions, including the question whether the code is a performance relevant hot spot. If not, it might happen that a call doesn’t get optimized, but that still shouldn’t bother you, because, well, it’s not performance relevant then.
When does Java JIT inline a method call? Is it based on #times the caller method is called (if yes, what would that number be?), or some other criteria (and what would that be?)
I've read that JIT can inline 'final' methods, but it also inlines nonfinal methods based on runtime statistics, so want to know what is that triggering criteria.
I guess the answers would differ based on JVM implementation, but maybe there's something common across all of them?
The short answer is whenever it wants.
Very often a JITC will inline small final or pseudo-final methods automatically, without first gathering any stats. This is because it's easy to see that the inlining actually saves code bytes vs coding the call (or at least that it's nearly a "wash").
Inlining truly non-final methods is not usually done unless stats suggest it's worthwhile, since inlined non-finals must be "guarded" somehow in case an unexpected subclass comes through.
As to the number of times something may be called before it's JITCed or inlined, that's highly variable, and is likely to vary even within a running JVM.
The default inline threshold for a JVM running the server HotSpot compiler is 35 bytecodes (the -XX:MaxInlineSize option).
Official docs
Typically JIT only inlines "small" methods by default. Other than that it's completely dependent on the implementation.
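To see HotSpot's inlining decisions for yourself, you can run a sketch like the following (class and method names are made up) under the diagnostic flags shown in the comment; the log format varies by JVM build.

```java
// Run with:
//   java -XX:+UnlockDiagnosticVMOptions -XX:+PrintInlining InlineDemo
// A tiny hot method like tiny() is a prime inlining candidate, since
// it is well under the default 35-bytecode MaxInlineSize threshold.
class InlineDemo {
    static int tiny(int x) { return x + 1; }

    public static void main(String[] args) {
        int acc = 0;
        for (int i = 0; i < 1_000_000; i++) {
            acc = tiny(acc) % 1000; // hot call site
        }
        System.out.println(acc); // prints 0
    }
}
```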
For testing purposes I need to be sure that certain methods are not inlined when the respective code is compiled to produce the .class files.
How do I do it in Eclipse?
EDIT
For those who really need to know why before they tell how, here is the explanation: I am testing code which examines and manipulates JVM bytecode. This is why I sometimes want to avoid method inlining.
You don't; you have very little control over how the compiler and JIT optimize bytecode.
It's not clear to me why you'd want to do this, though.
Note that various JVM implementations may allow tweaking, e.g., -XX:MaxInlineSize= in HotSpot might be set to an impossibly low number, meaning no methods would be inlined. There may be an equivalent option in the Eclipse compiler, but I'd be wary.
Java methods are never inlined when producing the .class files (only by an optimizing JVM at run time), so you have nothing to worry about.
When it comes to Java inlining functions, you are entirely at the whim of the compiler. You have no say in it whatsoever.
There are various metrics that it uses to determine whether something should be inlined. I believe one of these is the number of bytecode instructions in the method. So if you had a method like this:
void foo() {
    if (SOME_GLOBAL_BOOLEAN_THATS_ALWAYS_FALSE) {
        // lots of statements here
    }
    // code here
}
You might be able to reduce the chance of it inlining on you, provided you were clever enough in your if statement to make sure it wasn't going to optimize it out on you.