Recently I bumped into a situation where a static code analysis tool (PMD) complained about a switch statement that had too few branches. It suggested turning it into an if statement, which I did not want to do because I knew that more cases would soon be added. But I wondered whether javac performs such an optimization or not. I decompiled the code using JAD, but it still showed a switch. Is it possible that this is optimized at runtime by the JIT?
Update: Please do not be misled by the context of my question. I'm not asking about PMD, and I'm not asking about the need for micro-optimisation. The question is clearly only this: does the current (Oracle 1.6.x) JVM implementation contain a JIT that deals with switches with too few branches, or not?
The way to determine how the JIT compiler is optimizing switch statements is either:
read the JIT compiler source code (OpenJDK 6 and 7 are open source), or
run the JVM with the switch that tells it to dump the JIT compiled code for the classes of interest to a file.
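On HotSpot, for example, the relevant switches look roughly like this (-XX:+PrintAssembly requires -XX:+UnlockDiagnosticVMOptions and the hsdis disassembler plugin; exact flags and output vary by JVM version and vendor, so treat this as a sketch):
java -XX:+UnlockDiagnosticVMOptions -XX:+PrintAssembly MyClass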
Note that like all questions related to performance and optimization, the answer depends on the hardware platform and the JVM vendor and version.
Reference: Disassemble Java JIT compiled native bytecode
If this Question is "mere idle curiosity", so be it.
However, it should also be pointed out that rewriting your code to use switch or if for performance reasons is probably a bad idea and/or a waste of time.
It is probably a waste of time because the chances are that the difference in time (if any) between the original and hand-optimized versions will be insignificant.
It is a bad idea because your optimization may only be helpful for specific hardware and JVM combinations. On others, it may have no effect ... or even be an anti-optimization.
In short, even if you know how the JIT optimizer handles this, you probably shouldn't be taking it into account in your programming.
(The exception of course is when you have a real measurable performance problem, and profiling points to (say) a 3-branch switch as being one of the bottlenecks.)
If you compiled it in debug mode, it is normal that when you decompile it, you still get the switch. Otherwise, any debugging attempt would miss some information such as line numbers and the original instruction flow.
You could thus try to compile in production mode and see what the decompilation result would be.
However, a switch statement, especially if it is expected to grow, is generally considered a code smell and should be evaluated as a good candidate for refactoring.
As for your clarification of what the question is:
Since this depends so strongly on the hardware and the JVM (JVMs using the Java trademark may be developed by companies other than Oracle as long as they adhere to the JVM specification), I'd say the only valid method would be to run speed tests.
Cut out a chunk of code, lock it in a loop for a considerable number of repetitions, and check the time before and after execution of the loop. Repeat for both solutions (switch and if).
This may seem simplistic and silly, but it actually works, and is a lot faster than decompiling, reading through bytecode and memory dumps etc.
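A minimal sketch of that idea (all names are invented for illustration; the volatile sink guards against the JIT eliminating the loop bodies as dead code, and even so, hand-rolled timing loops like this are easy to get wrong):
public class SwitchVsIf {
    static volatile int sink; // prevents dead-code elimination of the loops

    public static void main(String[] args) {
        final int reps = 10000000;

        // warm up so the JIT has a chance to compile both variants
        for (int i = 0; i < reps; i++) { sink += withSwitch(i % 10) + withIf(i % 10); }

        long t0 = System.nanoTime();
        for (int i = 0; i < reps; i++) { sink += withSwitch(i % 10); }
        long t1 = System.nanoTime();
        for (int i = 0; i < reps; i++) { sink += withIf(i % 10); }
        long t2 = System.nanoTime();

        System.out.println("switch: " + (t1 - t0) / 1000000 + " ms");
        System.out.println("if:     " + (t2 - t1) / 1000000 + " ms");
    }

    static int withSwitch(int a) {
        switch (a) { case 5: return 1; default: return 0; }
    }

    static int withIf(int a) {
        return (a == 5) ? 1 : 0;
    }
}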
You have to remember that Java actually uses a virtual machine and bytecode. I'm pretty sure this is all handled and optimized. We are using high-level languages to AVOID the kind of micromanagement and optimization that you're asking about.
On a more general note, I think you are trying to optimize a bit too early. If you know there are going to be more cases in that switch, why bother at all? Did you run a profiler? If not, it's no use optimizing. "Premature optimization is the root of all evil." You might be optimizing a part of the code that actually isn't the bottleneck, increasing code complexity and wasting your own time on writing code that does not contribute in any way.
I don't know what type of app you are making, but a rule of thumb says that clarity is king, and you should usually choose the simpler, more elegant, self-documenting solution.
javac performs almost no optimisations. All the optimisations are performed at runtime by the JIT. Unless you know you have a performance problem, I would assume you don't.
What PMD is complaining about is clarity, e.g.
if (a == 5) {
    // something
} else {
    // something else
}
is clearer than
switch (a) {
    case 5:
        // something
        break;
    default:
        // something else
        break;
}
Related
Do longer method names have an increased latency when called? Is this a noticeable effect after a long period of calls?
No, method names have nothing to do with performance. And by the way, the JVM doesn't use the name to invoke the method; it uses a symbolic reference, which points to the method's name in the constant pool.
invokestatic #2 //Method fn:()V
So even the byte-code doesn't get bloated with lengthy method names.
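You can check this with javap (the class below is made up purely for illustration):
class Demo {
    // deliberately long name to show it costs nothing at the call site
    static void aVeryLongAndDescriptiveMethodName() { }

    public static void main(String[] args) {
        aVeryLongAndDescriptiveMethodName();
    }
}
Running javap -c Demo shows the call compiled to something like invokestatic #2: a two-byte constant-pool index. The long name is stored once in the constant pool rather than being repeated at every call site.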
No. The length of the method name doesn't make a difference.
The fact that you're using a method does, though. The more methods, the more overhead. That's the price we pay for cleaner and more maintainable code.
Just for the sake of it: be careful when diving into "performance" from this "microscopic" point of view.
Of course it is a good thing to understand the effects of using different approaches in your source code (to avoid "stupid mistakes").
But efficiency and performance are things that receive "too much priority" far too often. Typically, when you focus on creating a clean, maintainable design, the outcome will have "good enough performance". If not, well, a clean, maintainable design is easier to change than a messed-up design anyway.
Keep in mind:
"We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil." and I agree with this. Its usually not worth spending a lot of time micro-optimizing code before its obvious where the performance bottlenecks are. But, conversely, when designing software at a system level, performance issues should always be considered from the beginning. A good software developer will do this automatically, having developed a feel for where performance issues will cause problems. An inexperienced developer will not bother, misguidedly believing that a bit of fine tuning at a later stage will fix any problems."
What Hoare and Knuth are really saying is that software engineers should worry about other issues (such as good algorithm design and good implementations of those algorithms) before they worry about micro-optimizations such as how many CPU cycles a particular statement consumes.
from here
When conducting performance testing of java code, you want to test JIT compiled code, rather than raw bytecode. To cause the bytecode to be compiled, you must trigger the compilation to occur by executing the code a number of times, and also allow enough yield time for the background thread to complete the compilation.
What is the minimum number of "warm up" executions of a code path required to be "very confident" that the code will be JIT compiled?
What is the minimum sleep time of the main thread to be "very confident" that the compilation has completed (assuming a smallish code block)?
I'm looking for a threshold that would safely apply in any modern OS, say Mac OS or Windows for the development environment and Linux for CI/production.
Since the OP's intent is not actually to figure out whether the block is JIT-compiled, but rather to make sure the optimized code is measured, I think the OP needs to watch some of these benchmarking talks.
TL;DR version: There is no reliable way to figure out whether you hit the "steady state":
You can only measure for a long time to get a ballpark estimate of the usual time your concrete system takes to reach a state you can claim is "steady".
Observing -XX:+PrintCompilation is not reliable, because you may be in a phase when counters are still in flux, and the JIT is standing by to compile the next batch of now-hot methods. You can easily have a number of warmup plateaus because of that. A method can even be recompiled multiple times, depending on how many tiered compilers are standing by.
While one could argue about the invocation thresholds, these things are not reliable either, since tiered compilation may be involved, method might get inlined sooner in the caller, the probabilistic counters can miss the updates, etc. That is, common wisdom about -XX:CompileThreshold=# is unreliable.
JIT compilation is not the only warmup effect you are after. Automatic GC heuristics, scheduler heuristics, etc. also require warmup.
Get a microbenchmark harness which will make the task easier for you!
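For example, a minimal sketch with JMH, one such harness (the iteration counts are arbitrary; the harness takes care of warmup, forking and steady-state measurement for you):
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Measurement;
import org.openjdk.jmh.annotations.Warmup;

public class MyBenchmark {

    @Benchmark
    @Fork(1)
    @Warmup(iterations = 10)      // discarded iterations, run before measuring
    @Measurement(iterations = 10) // only these iterations are reported
    public int codeUnderTest() {
        // returning the result stops the JIT from treating it as dead code
        return 6 * 7;
    }
}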
To begin with, the results will most likely differ for a JVM run in client mode versus server mode. Second, this number depends highly on the complexity of your code, and I am afraid you will have to exploratively estimate a number for each test case. In general, the more complex your bytecode is, the more optimization can be applied to it, and therefore your code must get relatively hotter in order to make the JVM reach deep into its toolbox. A JVM might recompile a segment of code a dozen times.
Furthermore, a "real world" compilation depends on the context in which your bytecode is run. A compilation might, for example, occur when a monomorphic call site is promoted to a megamorphic one, such that an observed compilation actually represents a de-optimization. Therefore, be careful when assuming that your micro benchmark reflects the code's actual performance.
Instead of the suggested flag, I suggest using CompilationMXBean, which allows you to check the amount of time the JVM still spends on compilation. If that time is too high, rerun your test until the value stays stable long enough. (Be patient!) Frameworks can help you with creating good benchmarks. Personally, I like Caliper. However, never trust your benchmark.
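A sketch of the CompilationMXBean approach (runCodeUnderTest is a made-up placeholder; getTotalCompilationTime reports the accumulated JIT time in milliseconds, where the JVM supports monitoring it):
import java.lang.management.CompilationMXBean;
import java.lang.management.ManagementFactory;

public class WarmupCheck {
    public static void main(String[] args) throws InterruptedException {
        CompilationMXBean jit = ManagementFactory.getCompilationMXBean();
        if (jit == null || !jit.isCompilationTimeMonitoringSupported()) {
            System.out.println("compilation time monitoring not supported");
            return;
        }
        long previous = -1;
        while (true) {
            runCodeUnderTest();                      // the code you want warmed up
            long total = jit.getTotalCompilationTime();
            if (total == previous) break;            // no new JIT work: likely stable
            previous = total;
            Thread.sleep(100);                       // give the compiler thread time
        }
        // ... now run the actual measurements ...
    }

    static void runCodeUnderTest() { /* placeholder for the benchmarked code */ }
}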
From my experience, custom bytecode performs best when you imitate the idioms of javac. To mention the one anecdote I can tell on this matter, I once wrote custom bytecode for the Java source code equivalent of:
int[] array = {1, 2, 3};
javac creates the array and uses dup for assigning each value, but I stored the array reference in a local variable and loaded it back onto the operand stack for assigning each value. The actual array was bigger than in this example, and there was a noticeable performance difference.
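For reference, this is roughly the dup idiom javac emits for that line (as shown by javap -c; comments added):
iconst_3      // push the array length
newarray int  // create the int[] and push its reference
dup           // copy the reference so it survives the following store
iconst_0      // index 0
iconst_1      // value 1
iastore       // array[0] = 1 (consumes the copied reference)
dup
iconst_1      // index 1
iconst_2      // value 2
iastore       // array[1] = 2
dup
iconst_2      // index 2
iconst_3      // value 3
iastore       // array[2] = 3
astore_1      // store the array reference in local variable 1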
Finally, I recommend this article before writing a benchmark.
Not sure about numbers, but when doing speed tests what I do is:
Run with the -XX:+PrintCompilation flag
Warm up the JVM until there are no more compilation debug messages generated and, if possible, timings become consistent
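In other words, start the test with something like:
java -XX:+PrintCompilation MyBenchmark
and keep looping over the code under test until new compilation lines stop scrolling by.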
Assume I have a loop (any while or for) like this:
loop {
    // a long block of code
}
From the point of view of time complexity, should I divide this code into parts, write a function outside the loop, and call that function repeatedly?
I read something about functions long ago, that calling a function repeatedly takes more time or memory or something like that; I don't remember exactly. Can you also provide some good references about things like this (time complexity, coding style)?
Can you also provide some reference book or tutorial about heap memory, overheads, etc. which affect the performance of a program?
The performance difference is probably very minimal in this case. I would concentrate on clarity rather than performance until you identify this portion of your code to be a serious bottleneck.
It really does depend on what kind of code you're running in the loop, however. If you're just doing a tiny mathematical operation that isn't going to take any CPU time, but you're doing it a few hundred thousand times, then inlining the calculation might make sense. Anything more expensive than that, though, and performance shouldn't be an issue.
There is an overhead of calling a function.
So if the "long code" is fast compared to this overhead (and your application cares about performance), then you should definitely avoid the overhead.
However, if the performance is not noticeably worse, it's better to make the code more readable by using a function (or better, multiple functions).
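To make the two shapes being compared concrete (a made-up example; the squaring stands in for "a long code"):
public class LoopExtraction {
    public static void main(String[] args) {
        int n = 1000;

        // variant 1: the long body written inline in the loop
        long sumInline = 0;
        for (int i = 0; i < n; i++) {
            sumInline += (long) i * i;
        }

        // variant 2: the body extracted into a named method
        long sumMethod = 0;
        for (int i = 0; i < n; i++) {
            sumMethod += body(i);
        }

        System.out.println(sumInline + " " + sumMethod);
    }

    // small, hot methods like this are prime candidates for JIT inlining,
    // so the call overhead usually disappears at runtime
    static long body(int i) {
        return (long) i * i;
    }
}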
Rule one of performance optimisation: measure it.
Personally, I go for readable code first and then optimise it IF NECESSARY. Usually, it isn't necessary :-)
See the first line in CHAPTER 3 - Measurement Is Everything
"We should forget about small efficiencies, say about 97% of the time:
premature optimization is the root of all evil." - Donald Knuth
In this case, the difference in performance will probably be minimal between the two solutions, so writing clearer code is the way to do it.
There really isn't a simple "tutorial" on performance; it is a very complex subject, and one that even seasoned veterans often don't fully understand. Anyway, to give you more of an idea of what the overhead of "calling" a function is: basically, what you are doing is "freezing" the state of your function (in Java there are no "functions" per se; they are all called methods), calling the method, then "unfreezing" back to where your method was before.
The "freezing" essentially consists of pushing state information(where you were in the method, what the value of the variables was etc) on to the stack, "unfreezing" consists of popping the saved state off the stack and updating the control structures to where they were before you called the function. Naturally memory operations are far from free, but the VM is pretty good at keeping the performance impact to an absolute minimum.
Now keep in mind that Java is almost entirely heap based; the only things that really have to get pushed onto the stack are the values of pointers (small), your place in the program (again small), whatever primitives you have local to your method, and a tiny bit of control information, nothing else. Furthermore, although you cannot explicitly inline in Java (though I'm sure there are bytecode editors out there that essentially let you do that), most VMs, including the most popular HotSpot VM, will do this automatically for you. http://java.sun.com/developer/technicalArticles/Networking/HotSpot/inlining.html
So the bottom line is pretty much zero performance impact. If you want to verify this for yourself, you can always run benchmarking and profiling tools; they should be able to confirm it for you.
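If you want to watch HotSpot's inlining decisions directly, there is a diagnostic flag for that (availability and output format vary by JVM version):
java -XX:+UnlockDiagnosticVMOptions -XX:+PrintInlining MyClass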
From an execution-speed point of view it shouldn't matter, and if you still believe this is a bottleneck, it is easy to measure.
From a development performance perspective, it is a good idea to keep the code short. I would vote for turning the loop contents into one (or more) properly named methods.
Forget it! You can't gain any performance by doing the JIT's job. Let the JIT inline it for you. Keep methods short for readability and also for performance, as the JIT works better with short methods.
There are micro-optimizations which may help you gain some performance, but don't even think about them. I suggest the following rules:
Write clean code using appropriate objects and algorithms for readability and for performance.
In case the program is too slow, profile and identify the critical parts.
Think about improving them using better objects and algorithms.
As a last resort, you may also consider microoptimizations.
I've heard that running recursive code on a server can impact performance. How true is this statement, and should recursive method be used as a last resort?
Recursion can potentially consume more memory than an equivalent iterative solution, because the latter can be optimized to take up only the memory it strictly needs, while recursion saves all local variables on the stack, thus taking up a bit more than strictly needed. This is only a problem in a memory-limited environment (which is not a rare situation) and for potentially very deep recursion: a few dozen recursive legs, taking up a few hundred bytes each at most, will not measurably impact the memory footprint of the server. So "last resort" is an overbid.
But when profiling shows you that the footprint impact is large, one optimization-refactoring you can definitely perform is recursion removal -- a popular topic in the academic literature a few decades ago, but typically not hard to do by hand (especially if you keep all your methods, recursive or otherwise, reasonably small, as you should;-).
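As a sketch of what recursion removal looks like (a made-up tree type; the iterative version moves the pending work from the call stack onto a heap-allocated stack):
import java.util.ArrayDeque;
import java.util.Deque;

public class RecursionRemoval {
    static class Node {
        int value;
        Node left, right;
    }

    // recursive version: depth is limited by the thread's call stack
    static long sumRecursive(Node n) {
        if (n == null) return 0;
        return n.value + sumRecursive(n.left) + sumRecursive(n.right);
    }

    // iterative version: the "stack" lives on the heap instead
    static long sumIterative(Node root) {
        long sum = 0;
        Deque<Node> stack = new ArrayDeque<Node>();
        if (root != null) stack.push(root);
        while (!stack.isEmpty()) {
            Node n = stack.pop();
            sum += n.value;
            if (n.left != null) stack.push(n.left);
            if (n.right != null) stack.push(n.right);
        }
        return sum;
    }
}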
I've heard that running recursive code on a server can impact performance. How true is this statement?
It is true that it impacts performance, in the same way that creating variables, looping, or executing pretty much anything else does.
If the recursive code is poor or uncontrolled, it will consume your system resources the same way an uncontrolled while loop would.
and should recursive method be used as a last resort?
No. It may be used as a first resort; many times it is easier to code a recursive function. It depends on your skills. But to be clear, there is nothing particularly evil about recursion.
To discuss performance you have to talk about very specific scenarios. Used appropriately, recursion is fine. If you use it inappropriately, you could blow the stack, or just use too much stack. This is especially true if you somehow get a recursive tail call without it ever terminating (typically a bug, such as an attempt to walk a cyclic graph), as it won't even blow the stack (it'll just run forever, chomping CPU cycles).
But get it right (and limit the depth to sane amounts) and it is fine.
A badly programmed recursion that does not end has a negative impact on the machine, consuming an ever-growing amount of resources and threatening the stability of the whole system in the worst case.
Otherwise, recursions are a perfectly legitimate tool like loops and other constructs. They have no negative effect on performance per se.
Tail recursion is also an alternative. It boils down to this: pass the result accumulated so far as a parameter of the recursive call, so that the call is the last action in the method. On runtimes that optimize tail calls, the stack then won't blow up; on the JVM you can apply the same transformation by hand to get a loop. More at Wikipedia and this site.
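A sketch of the accumulator idea (note that, as another answer below explains, the Sun JVM does not perform tail-call optimization for you, but the tail-recursive form is mechanical to rewrite as a loop by hand):
public class TailRecursion {
    // plain recursion: the multiply happens after the recursive call returns,
    // so each call must keep its stack frame alive
    static long factorial(long n) {
        if (n <= 1) return 1;
        return n * factorial(n - 1);
    }

    // tail-recursive form: the result so far travels in the 'acc' parameter,
    // and the recursive call is the very last action
    static long factorialTail(long n, long acc) {
        if (n <= 1) return acc;
        return factorialTail(n - 1, n * acc);
    }

    // the mechanical loop translation of the tail-recursive form
    static long factorialLoop(long n) {
        long acc = 1;
        while (n > 1) {
            acc *= n;
            n--;
        }
        return acc;
    }
}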
Recursion is a tool of choice when you have to write algorithms. It's also much easier than iteration when you have to deal with recursive data structures like trees or graphs. It's usually harmless if (as a rule of thumb) you can bound the recursion depth to something not too large, provided that you do not forget the end condition...
Most modern compilers are able to optimize some kinds of recursive calls (replacing them internally with non-recursive equivalents). It's especially easy with tail recursion, that is, when the recursive call is the last instruction before returning the result.
However, there are some issues specific to Java. The underlying JVM does not provide any kind of goto instruction. This sets limits on what the compiler can do. If it's tail recursion internal to one function, it can be replaced by a simple loop internal to the function; but if the terminal call is made through another function, or if several functions recursively call one another, it becomes quite difficult to do when targeting JVM bytecode. The Sun JVM does not support tail-call optimization, but there are plans to change that; the IBM JVM does support tail-call optimization.
With some languages (functional languages like Lisp or Haskell), recursion is also the only (or the most natural) way to write programs. In JVM-based functional languages like Clojure or Scala, the lack of tail-call optimization is a problem that leads to workarounds like trampolines in Scala.
Running any code on a server can impact performance. Server performance is usually going to be impacted by storage I/O before anything else, so at the level of "server performance" it's odd to see the question of general algorithm strategy talked about.
Deep recursion can cause a stack overflow, which is nasty. Be careful, as it's hard to recover if you hit it. Small, manageable pieces of work are easier to handle and parallelize.
I am wondering if there is any performance differences between
String s = someObject.toString();
System.out.println(s);
and
System.out.println(someObject.toString());
Looking at the generated bytecode, there seem to be differences. Is the JVM able to optimize this bytecode at runtime so that both solutions provide the same performance?
In this simple case, solution 2 of course seems more appropriate, but sometimes I would prefer solution 1 for readability purposes, and I just want to be sure not to introduce performance "decreases" in critical code sections.
The creation of a temporary variable (especially something as small as a String) is inconsequential to the speed of your code, so you should stop worrying about this.
Try measuring the actual time spent in this part of your code and I bet you'll find there's no performance difference at all. The time it takes to call toString() and print out the result takes far longer than the time it takes to store a temporary value, and I don't think you'll find a measurable difference here at all.
Even if the bytecode looks different here, it's because javac is naive and your JIT Compiler does the heavy lifting for you. If this code really matters for speed, then it will be executed many, many times, and your JIT will select it for compilation to native code. It is highly likely that both of these compile to the same native code.
Finally, why are you calling System.out.println() in performance-critical code? If anything here is going to kill your performance, that will.
If you have critical code sections that demand performance, avoid using System.out.println(). There is more overhead incurred by going to standard output than there ever will be with a variable assignment.
Do solution 1.
Edit: or solution 2
There is no* code critical enough that the difference between your two samples makes any difference at all. I encourage you to test this; run both a few million times, and record the time taken.
Pick the more readable and maintainable form.
* Exaggerating for effect. If you have code critical enough, you've studied it to learn this.
The generated bytecode is not a good measure of the performance of a given piece of code, since this bytecode will get analysed, optimised and (in the case of the server compiler) re-analysed and re-optimised if it is deemed to be a performance bottleneck.
When in doubt, use a profiler.
Compared to output to the console, I doubt that any difference in performance between the two is going to be measurable. Don't optimize before you have measured and confirmed that you have a problem.