Boolean expression optimizations in Java

Consider the following method in Java:
public static boolean expensiveComputation() {
    for (int i = 0; i < Integer.MAX_VALUE; ++i);
    return false;
}
And the following main method:
public static void main(String[] args) {
    boolean b = false;
    if (expensiveComputation() && b) {
    }
}
Logical conjunction (the && operator) is a commutative operation, so why doesn't the compiler optimize the if-statement into the equivalent:
if (b && expensiveComputation()) {
}
which has the benefit of short-circuit evaluation?
Moreover, does the compiler try to make other logic simplifications or permutations of booleans in order to generate faster code? If not, why? Surely some optimizations would be very difficult, but isn't my example simple? Calling a method should always be slower than reading a boolean, right?
Thank you in advance.

It doesn't do that because expensiveComputation() may have side effects which change the state of the program. This means that the order in which the expressions in the boolean statements are evaluated (expensiveComputation() and b) matters. You wouldn't want the compiler optimizing a bug into your compiled program, would you?
For example, what if the code were like this:
public static boolean expensiveComputation() {
    for (int i = 0; i < Integer.MAX_VALUE; ++i);
    b = false;
    return false;
}

public static boolean b = true;

public static void main(String[] args) {
    if (expensiveComputation() || b) {
        // do stuff
    }
}
Here, if the compiler performed your optimization, then the // do stuff would run when you wouldn't expect it to from reading the code (because b, which is originally true, would be evaluated first).

Because expensiveComputation() may have side-effects.
Since Java doesn't aim to be a functionally pure language, it doesn't inhibit programmers from writing methods that have side effects. Thus there probably isn't a lot of value in the compiler analyzing for functional purity. And optimizations like the one you posit are unlikely to be very valuable in practice, because expensiveComputation() would usually have to be executed anyway, to produce its side effects.
Of course, for a programmer, it's easy to put the b first if they expect it to be false and explicitly want to avoid the expensive computation.

Actually, some compilers can optimise programs like the one you suggested; they just have to make sure that the function has no side effects. GCC has function attributes (such as pure) you can annotate a function with to declare that it has no side effects, which the compiler may then use when optimizing. Java may have something similar.
A classic example is
for (ii = 0; strlen(s) > ii; ii++) { /* do something */ }
which gets optimized to
n = strlen(s); for (ii = 0; n > ii; ii++) { /* do something */ }
by GCC with optimization level 2, at least on my machine.
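The same idea carries over to Java, where the JIT can often prove a call like String.length() is loop-invariant and hoist it, since String is immutable and length() has no side effects. As a hand-written sketch (my example, not from the answer above) of what that hoisting amounts to:

static int countSpaces(String s) {
    int count = 0;
    int n = s.length(); // hoisted once, instead of evaluated per iteration
    for (int i = 0; i < n; i++) {
        if (s.charAt(i) == ' ') {
            count++;
        }
    }
    return count;
}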

The compiler will optimize this if you run the code often enough, probably by inlining the method and simplifying the resulting boolean expression (but most likely not by reordering the arguments of &&).
You can benchmark this by timing, say, a million iterations of this code repeatedly. The first iteration or two are much slower than the following ones.
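For instance, here is a rough warm-up sketch (my own, and not a rigorous benchmark; use a harness like JMH for real measurements). The first rounds typically run interpreted and are noticeably slower than the later, JIT-compiled ones:

public class WarmupDemo {
    static boolean b = false;

    static boolean cheapComputation() {
        return false;
    }

    public static void main(String[] args) {
        for (int round = 0; round < 10; round++) {
            long start = System.nanoTime();
            boolean sink = false;
            for (int i = 0; i < 1_000_000; i++) {
                if (cheapComputation() && b) {
                    sink = true;
                }
            }
            // earlier rounds are usually much slower than later ones
            System.out.println("round " + round + ": "
                    + (System.nanoTime() - start) + " ns (sink=" + sink + ")");
        }
    }
}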

The version of Java I am using optimises a && b by short-circuiting on a, but not on b.
That is, if a is false, b does not get evaluated; but when b was false, a was still evaluated.
I found this out when I was implementing validation in a website form: I built the messages to display on the web page in a series of boolean methods.
I expected all of the incorrectly entered fields on the page to become highlighted but, because of Java's speed-hack, the code was only executed until the first incorrect field was discovered. After that, Java must have reasoned something like "false && anything is always false" and skipped the remaining validation methods.
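For illustration, here is a minimal, hypothetical reconstruction of that pitfall (the class and method names are mine, not the original form code). The non-short-circuit & operator is one way to force every check to run:

public class FormValidator {
    private final String name;
    private final String email;

    FormValidator(String name, String email) {
        this.name = name;
        this.email = email;
    }

    private boolean validateName() {
        boolean ok = !name.isEmpty();
        if (!ok) System.out.println("highlight: name field");
        return ok;
    }

    private boolean validateEmail() {
        boolean ok = email.contains("@");
        if (!ok) System.out.println("highlight: email field");
        return ok;
    }

    public static void main(String[] args) {
        FormValidator form = new FormValidator("", "not-an-email");

        // && short-circuits: validateEmail() never runs here, so only
        // the name field is highlighted -- the behaviour described above.
        boolean shortCircuit = form.validateName() && form.validateEmail();

        // & evaluates both sides unconditionally: every invalid field
        // gets highlighted.
        boolean allChecks = form.validateName() & form.validateEmail();

        System.out.println(shortCircuit + " " + allChecks);
    }
}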
I suppose, as a direct answer to your question: without optimisations like this your program may run slower than it could, but with them, someone else's program would completely break, because it assumed the non-optimised behaviour, like the side effects mentioned in the other answers.
Unfortunately, it's difficult to automate intelligent decisions like this, especially in imperative languages (C, C++, Java, Python... i.e. the mainstream languages).

Related

Expression evaluation in C vs Java

int y = 3;
int z = (--y) + (y = 10);
When executed in C, the value of z comes out as 20, but when the same expression is executed in Java, z evaluates to 12.
Can anyone explain why this happens and what the difference is?
when executed in C language the value of z evaluates to 20
No it does not. This is undefined behavior, so z could get any value. Including 20. The program could also theoretically do anything, since the standard does not say what the program should do when encountering undefined behavior. Read more here: Undefined, unspecified and implementation-defined behavior
As a rule of thumb, never modify a variable twice in the same expression.
It's not a good duplicate, but this will explain things a bit deeper. The reason for undefined behavior here is sequence points. Why are these constructs using pre and post-increment undefined behavior?
In C, when it comes to arithmetic operators, like + and /, the order of evaluation of the operands is not specified in the standard, so if the evaluation of those has side effects, your program becomes unpredictable. Here is an example:
#include <stdio.h>

int foo(void)
{
    printf("foo()\n");
    return 0;
}

int bar(void)
{
    printf("bar()\n");
    return 0;
}

int main(void)
{
    int x = foo() + bar();
}
What will this program print? Well, we don't know. I'm not entirely sure if this snippet invokes undefined behavior or not, but regardless, the output is not predictable. I asked a question about that, Is it undefined behavior to use functions with side effects in an unspecified order?, so I'll update this answer later.
Some other operators have a specified (left-to-right) order of evaluation, like || and &&, and this feature is used for short-circuiting. For instance, if we use the example functions above and write foo() && bar(), only the foo() function will be executed.
I'm not very proficient in Java, but for completeness, I want to mention that Java basically does not have undefined or unspecified behavior except for very special situations. Almost everything in Java is well defined. For more details, read rzwitserloot's answer
There are 3 parts to this answer:
1. How this works in C (unspecified behaviour)
2. How this works in Java (the spec is clear on how this should be evaluated)
3. Why there is a difference
For #1, you should read #klutt's fantastic answer.
For #2 and #3, you should read this answer.
How does it work in java?
Unlike C's, Java's language specification is far more precise. For example, C doesn't even tell you how many bits the data type int is supposed to have, whereas the Java language spec does: 32 bits, even on 64-bit processors and a 64-bit Java implementation.
The Java spec clearly says that x + y is to be evaluated left to right (vs. C's 'in any order you please, compiler'). Thus, --y is evaluated first, which is clearly 2 (with the side effect of making y 2); then y = 10 is evaluated, which is clearly 10 (with the side effect of making y 10); and then 2 + 10 is evaluated, which is clearly 12.
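You can verify this left-to-right behaviour directly; a quick check mirroring the question's expression:

public class EvalOrder {
    public static void main(String[] args) {
        int y = 3;
        int z = (--y) + (y = 10); // (--y) yields 2, then (y = 10) yields 10
        System.out.println(z);    // always prints 12 in Java, as the spec guarantees
    }
}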
Obviously, a language like java is just better; after all, undefined behaviour is pretty much a bug by definition, whatever was wrong with the C lang spec writers to introduce this crazy stuff?
The answer is: performance.
In C, your source code is turned into machine code by the compiler, and the machine code is then interpreted by the CPU. A 2-step model.
In java, your source code is turned into bytecode by the compiler, the bytecode is then turned into machine code by the runtime, and the machine code is then interpreted by the CPU. A 3-step model.
If you want to introduce optimizations, you don't control what the CPU does, so for C there is only 1 step where it can be done: Compilation.
So C (the language) is designed to give lots of freedom to C compilers to attempt to produce optimized machine code. This is a cost/benefit scenario: At the cost of having a ton of 'undefined behaviour' in the lang spec, you get the benefit of better optimizing compilers.
In java, you get a second step, and that's where java does its optimizations: At runtime. java.exe does it to class files; javac.exe is quite 'stupid' and optimizes almost nothing. This is on purpose; at runtime you can do a better job (for example, you can use some bookkeeping to track which of two branches is more commonly taken and thus branch predict better than a C app ever could) - it also means that cost/benefit analysis now results in: The lang spec should be clear as day.
So java code is never undefined behaviour?
Not so. Java has a memory model which includes a ton of undefined behaviour:
class X { int a, b; }

X instance = new X();

new Thread() { public void run() {
    int a = instance.a;
    int b = instance.b;
    instance.a = 5;
    instance.b = 6;
    System.out.print(a);
    System.out.print(b);
}}.start();

new Thread() { public void run() {
    int a = instance.a;
    int b = instance.b;
    instance.a = 1;
    instance.b = 2;
    System.out.print(a);
    System.out.print(b);
}}.start();
is undefined in java. It may print 0056, 0012, 0010, 0002, 5600, 0600, and many many more possibilities. Something like 5000 (which it could legally print) is hard to imagine: How can the read of a 'work' but the read of b then fail?
For the exact same reason your C code produces arbitrary answers:
Optimization.
The cost/benefit of 'hardcoding' in the spec exactly how this code must behave would carry a large cost: you'd take away most of the room for optimization. So Java paid the cost and now has a lang spec that is ambiguous whenever you modify/read the same fields from different threads without establishing so-called 'happens-before' relationships, using e.g. synchronized.
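For completeness, here is a minimal sketch (mine, not from the answer) of one way to establish those happens-before edges with synchronized, so reads and writes of a and b are no longer racy:

class SafeX {
    private int a, b;

    synchronized void set(int a, int b) {
        this.a = a;
        this.b = b;
    }

    // Readers synchronize on the same monitor, so they always see a
    // consistent pair written by some previous set() call.
    synchronized int[] get() {
        return new int[] { a, b };
    }
}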
When executed in C language the value of z evaluates to 20
That is not the truth. The compiler you happen to use evaluates it to 20; another one can evaluate it in a completely different way: https://godbolt.org/z/GcPsKh
This kind of behaviour is called Undefined Behaviour.
In your expression you have two problems:
1. The order of evaluation (except for the logical operators) is not specified in C (it is Unspecified Behaviour).
2. In this expression there is also a problem with sequence points (Undefined Behaviour).

How are JVM optimizations based on assumptions?

In section 12.3.3, "Unrealistic Sampling of Code Paths", the book Java Concurrency in Practice says:
In some cases, the JVM may make optimizations based on assumptions that may only be true temporarily, and later back them out by invalidating the compiled code if they become untrue.
I cannot understand the above statement.
1. What are these JVM assumptions?
2. How does the JVM know whether the assumptions are true or untrue?
3. If the assumptions are untrue, does it influence the correctness of my data?
The statement that you quoted has a footnote which gives an example:
For example, the JVM can use monomorphic call transformation to convert a virtual method call to a direct method call if no classes currently loaded override that method, but it invalidates the compiled code if a class is subsequently loaded that overrides the method.
The details are very, very, very complex here, so the following is an extremely oversimplified example.
Imagine you have an interface:
interface Adder { int add(int x); }
The method is supposed to add a value to x and return the result. Now imagine that there is a program that uses an implementation of this interface:
class OneAdder implements Adder {
    public int add(int x) {
        return x + 1;
    }
}
class Example {
    void run() {
        OneAdder a1 = new OneAdder();
        int result = compute(a1);
        System.out.println(result);
    }

    private int compute(Adder a) {
        int sum = 0;
        for (int i = 0; i < 100; i++) {
            sum = a.add(sum);
        }
        return sum;
    }
}
In this example, the JVM could do certain optimizations. A very low-level one is that it could avoid using a vtable for calling the add method, because there is only one implementation of this method in the given program. But it could even go further, and inline this only method, so that the compute method essentially becomes this:
private int compute(Adder a) {
    int sum = 0;
    for (int i = 0; i < 100; i++) {
        sum += 1;
    }
    return sum;
}
and in principle, even this
private int compute(Adder a) {
    return 100;
}
But the JVM can also load classes at runtime. So there may be a case where this optimization has already been done, and later, the JVM loads a class like this:
class TwoAdder implements Adder {
    public int add(int x) {
        return x + 2;
    }
}
Now, the optimization that has been done to the compute method may become "invalid", because it's not clear whether it is called with a OneAdder or a TwoAdder. In this case, the optimization has to be undone.
This should answer 1. of your question.
Regarding 2.: The JVM keeps track of all the optimizations that have been done, of course. It knows that it has inlined the add method based on the assumption that there is only one implementation of this method. When it finds another implementation of this method, it has to undo the optimization.
Regarding 3.: The optimizations are done when the assumptions are true. When they become untrue, the optimization is undone. So this does not affect the correctness of your program.
Update:
Again, the example above was very simplified, referring to the footnote that was given in the book. For further information about the optimization techniques of the JVM, you may refer to https://wiki.openjdk.java.net/display/HotSpot/PerformanceTechniques . Specifically, the speculative (profile-based) techniques can probably be considered to be mostly based on "assumptions" - namely, on assumptions that are made based on the profiling data that has been collected so far.
Taking the quoted text in context, this section of the book is actually talking about the importance of using realistic test data (inputs) when you do performance testing.
Your questions:
What are these JVM assumptions?
I think the text is talking about two things:
On the one hand, it seems to be talking about optimizing based on the measurement of code paths. For example whether the "then" or "else" branch of an if statement is more likely to be executed. This can indeed result in generation of different code and is susceptible to producing sub-optimal code if the initial measurements are incorrect.
On the other hand, it also seems to be talking about optimizations that may turn out to be invalid. For example, at a certain point in time, there may be only one implementation of a given interface method that has been loaded by the JVM. On seeing this, the optimizer may decide to simplify the calling sequence to avoid polymorphic method dispatching. (The term used in the book for this is "monomorphic call transformation".) A bit later, a second implementation may be loaded, causing the optimizer to back out that optimization.
The first of these cases only affects performance.
The second of these would affect correctness (as well as performance) if the optimizer didn't back out the optimization. But the optimizer does do that. So it only affects performance. (The methods containing the affected calls need to be re-optimized, and that affects overall performance.)
How does the JVM know whether the assumptions are true or untrue?
In the first case, it doesn't.
In the second case, the problem is noticed when the JVM loads the 2nd method, and sees a flag on (say) the interface method that says that the optimizer has assumed that it is effectively a final method. On seeing this, the loader triggers the "back out" before any damage is done.
If the assumptions are untrue, does it influence the correctness of my data?
No it doesn't. Not in either case.
But the takeaway from the section is that the nature of your test data can influence performance measurements. And it is not simply a matter of size. The test data also needs to cause the application to behave the same way (take similar code paths) as it would behave in "real life".

The same computation inside a loop on a constant

In an interface I have the following:
public static byte[] and0xFFArray(byte[] array) {
    for (int i = 0; i < array.length; i++) {
        array[i] = (byte) (array[i] & 0xFF);
    }
    return array;
}
In another class I am calling the following:
while (true) {
    ...
    if (isBeforeTerminator(htmlInput, ParserI.and0xFFArray("포토".getBytes("UTF-8")), '<')) {
        ...
    }
    ...
}
My question is: will the resulting array from the String constant be computed once during compilation, or will it be computed every time the loop iterates?
Edit: I just noticed that the method doesn't make sense, but it doesn't affect the question.
I assume that you're referring to the result of
ParserI.and0xFFArray("포토".getBytes("UTF-8"))
Unless you explicitly cache/store the results somewhere, it'll be computed every time you call it.
You may want to consider something like:
byte[] parserI = ParserI.and0xFFArray("포토".getBytes("UTF-8"));
while (true) {
    ...
    if (isBeforeTerminator(htmlInput, parserI, '<'))
    ...
To understand why compilers don't implement this automatically, keep in mind that you can't write a general algorithm that detects whether a particular method will always return the same value; you'd quickly run into things like the Halting Problem, so anything you tried to write along those lines would be massively complicated and still wouldn't work a good percentage of the time. You'd also have to understand a fair amount about when a method will be called in order to work out a reasonable caching strategy. For example, is it worth persisting the cache after the loop? You'd have to understand a fair amount about the program structure to know for sure.
It is possible that an optimizer could recognize that the results of a method are constant under certain limited circumstances (and I'm not sure to what extent Java optimizers have actually implemented that), but you certainly can't count on it in the general case. The only way to know for sure whether this is one of those cases is to look at the actual bytecode that the compiler produces, but I highly doubt that it's being as smart as you'd like it to be here, for the reasons listed above. It's better to explicitly do the caching yourself, as shown above.
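If the array really is constant, one idiomatic way to do that caching is a static final field, computed once at class initialization. This is a sketch, assuming ParserI is usable at that point; StandardCharsets.UTF_8 also avoids the checked UnsupportedEncodingException that getBytes("UTF-8") can throw:

import java.nio.charset.StandardCharsets;

// Computed once when the class is initialized, then reused forever.
private static final byte[] NEEDLE =
        ParserI.and0xFFArray("포토".getBytes(StandardCharsets.UTF_8));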

Does Java optimize function calls for unnecessary boolean comparisons at runtime?

Will Java actually call all three functions at runtime even if one of them returns false?
Often for debugging purposes, I like to see the result of these conditional checks in the debugger and putting them in one line makes that much more difficult. If there are half a dozen checks, then the code gets to be more unreadable.
boolean result1 = CheckSomething();
boolean result2 = CheckSomethingElse();
boolean result3 = CheckSomeOtherThing();
boolean finalResult = result1 && result2 && result3;
If Java doesn't optimize this conditional at runtime, then better performance should be achieved by writing the code like this, at the cost of not being able to easily see the intermediate values in the debugger.
boolean finalResult = CheckSomething() && CheckSomethingElse() && CheckSomeOtherThing();
In general your code snippets are not equivalent. The first version must behave as-if all functions are called since you call them unconditionally in your code.
The second version of your code:
boolean finalResult = CheckSomething() && CheckSomethingElse() && CheckSomeOtherThing();
is not equivalent because the && operator short-circuits, so the remaining functions will not be called once one of them returns false.
When running in the interpreter, you can expect that the first version of your code will actually call all the methods.
Now, the interesting part: after the code is JIT-compiled, the compiler is able to "look into" the various functions, determine that they are pure functions without side effects, and it may then compile the two versions identically. In fact, if the functions are very simple, it may be better to compile both versions to always call all three functions, since combining three boolean results and testing them once may be much faster if any of the results are unpredictable.
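To make that concrete, here is a sketch of the "evaluate everything, test once" form using the non-short-circuit & operator (the check methods are stand-ins for the question's):

public class EagerChecks {
    static boolean checkSomething()      { return true;  }
    static boolean checkSomethingElse()  { return false; }
    static boolean checkSomeOtherThing() { return true;  }

    public static void main(String[] args) {
        // All three methods run unconditionally; only one branch
        // depends on the combined result.
        boolean finalResult = checkSomething()
                            & checkSomethingElse()
                            & checkSomeOtherThing();
        System.out.println(finalResult); // false
    }
}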
The compiler may not always be able to determine whether a function has side effects, or there may actually be side effects that only you know don't matter (e.g., an ArrayIndexOutOfBoundsException that you know won't occur but the compiler can't prove). So it's generally better to write the second form, which allows the compiler to use short-circuit evaluation regardless of what it is able to prove about the functions.
You can write this:
boolean result1 = condition1();
boolean result2 = result1 && condition2();
boolean result3 = result2 && condition3();
This allows you to step through each condition, and to see the intermediate results without executing unnecessary code.
You can also set a single breakpoint after the result3 line, which shows you the first result that returned false.
Agreed that the second example will keep evaluating until it reaches a false condition, and then it will return false. The short-circuit operator && is being used, so once a false condition is encountered, evaluation stops.

Reordering arguments using recursion (pros, cons, alternatives)

I find that I often make a recursive call just to reorder arguments.
For example, here's my solution for endOther from codingbat.com:
Given two strings, return true if either of the strings appears at the very end of the other string, ignoring upper/lower case differences (in other words, the computation should not be "case sensitive"). Note: str.toLowerCase() returns the lowercase version of a string.
public boolean endOther(String a, String b) {
    return a.length() < b.length() ? endOther(b, a)
            : a.toLowerCase().endsWith(b.toLowerCase());
}
I'm very comfortable with recursions, but I can certainly understand why some perhaps would object to it.
There are two obvious alternatives to this recursion technique:
Swap a and b traditionally
public boolean endOther(String a, String b) {
    if (a.length() < b.length()) {
        String t = a;
        a = b;
        b = t;
    }
    return a.toLowerCase().endsWith(b.toLowerCase());
}
- Not convenient in a language like Java that doesn't pass by reference
- Lots of code just to do a simple operation
- An extra if statement breaks the "flow"
Repeat code
public boolean endOther(String a, String b) {
    return (a.length() < b.length())
            ? b.toLowerCase().endsWith(a.toLowerCase())
            : a.toLowerCase().endsWith(b.toLowerCase());
}
- Explicit symmetry may be a nice thing (or not?)
- Bad idea unless the repeated code is very simple
- ...though in this case you can get rid of the ternary and just || the two expressions (see the sketch below)
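For illustration, that || variant might look like this (my sketch, not from the original post):

public boolean endOther(String a, String b) {
    String la = a.toLowerCase();
    String lb = b.toLowerCase();
    // symmetric, no ternary, and still short-circuits after the first match
    return la.endsWith(lb) || lb.endsWith(la);
}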
So my questions are:
- Is there a name for these 3 techniques? (Are there more?)
- Is there a name for what they achieve? (e.g. "parameter normalization", perhaps?)
- Are there official recommendations on which technique to use (and when)?
- What are other pros/cons that I may have missed?
Another example
To focus the discussion more on the technique rather than the particular codingbat problem, here's another example where I feel that the recursion is much more elegant than a bunch of if-else's, swaps, or repetitive code.
// sorts 3 values and returns them as an array
static int[] sort3(int a, int b, int c) {
    return
        (a > b) ? sort3(b, a, c) :
        (b > c) ? sort3(a, c, b) :
        new int[] { a, b, c };
}
Recursion and ternary operators don't bother me as much as it bothers some people; I honestly believe the above code is the best pure Java solution one can possibly write. Feel free to show me otherwise.
Let’s first establish that code duplication is usually a bad idea.
So whatever solution we take, the logic of the method should only be written once, and we need a means of swapping the arguments around that does not interfere with the logic.
I see three general solutions to that:
1. Your first recursion (either using if or the conditional operator).
2. swap – which, in Java, is a problem, but might be appropriate in other languages.
3. Two separate methods (as in #Ha’s solution) where one acts as the implementation of the logic and the other as the interface, in this case to sort out the parameters.
I don’t know which of these solutions is objectively the best. However, I have noticed that there are certain algorithms for which (1) is generally accepted as the idiomatic solution, e.g. Euclid’s algorithm for calculating the GCD of two numbers.
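For instance, Euclid’s algorithm is usually written so that the recursive call does double duty as the argument-reordering/reduction step, much like the endOther recursion above:

// Euclid's GCD: when a < b, a % b == a, so the first recursive call
// simply swaps the arguments -- reordering via recursion.
static int gcd(int a, int b) {
    return b == 0 ? a : gcd(b, a % b);
}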
I am generally averse to the swap solution (2) since it adds an extra call which doesn’t really do anything in connection with the algorithm. Now, technically this isn’t a problem – I doubt that it would be less efficient than (1) or (3) using any decent compiler. But it adds a mental speed-bump.
Solution (3) strikes me as over-engineered although I cannot think of any criticism except that it’s more text to read. Generally, I don’t like the extra indirection introduced by any method suffixed with “Impl”.
In conclusion, I would probably prefer (1) for most cases although I have in fact used (3) in similar circumstances.
Another +1 for "In any case, my recommendation would be to do as little in each statement as possible. The more things that you do in a single statement, the more confusing it will be for others who need to maintain your code."
Sorry but your code:
// sorts 3 values and returns them as an array
static int[] sort3(int a, int b, int c) {
    return
        (a > b) ? sort3(b, a, c) :
        (b > c) ? sort3(a, c, b) :
        new int[] { a, b, c };
}
It's perhaps for you the best "pure Java code", but for me it's the worst... unreadable code: if we don't have the method name or the comment, we just can't tell at first sight what it's doing...
Hard-to-read code should only be used when high performance is needed (and anyway, many performance problems are due to bad architecture...). If you HAVE TO write such code, the least you can do is write good Javadoc and unit tests... we developers often don't care about the implementation of such methods if we just have to use them, not rework them... but since the first sight doesn't tell us what it does, we have to trust that it works the way we expect, and we can lose time...
Recursive methods are OK when the method is short, but I think recursion should be avoided if the algorithm is complex and there's another way to do it in almost the same computation time... particularly if other people will probably work on this method.
For your example it's OK since it's a short method, but anyway, if you're just not concerned about performance you could have used something like this:
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// sorts int values
public static Integer[] sort(Integer... intValues) {
    List<Integer> list = new ArrayList<Integer>(Arrays.asList(intValues));
    Collections.sort(list);
    return list.toArray(new Integer[0]);
}
A simple way to implement your method, easily readable by any Java >= 1.5 developer, that works for 1 to n integers...
Not the fastest, but anyway, if it's just about speed, use C++ or asm :)
For this particular example, I wouldn't use anything you suggested; I would instead write:
public boolean endOther(String a, String b) {
    String alower = a.toLowerCase();
    String blower = b.toLowerCase();
    if (a.length() < b.length()) {
        return blower.endsWith(alower);
    } else {
        return alower.endsWith(blower);
    }
}
While the ternary operator does have its place, the if statement is often more intelligible, especially when the operands are fairly complex. In addition, if you repeat code in different branches of an if statement, they will only be evaluated in the branch that is taken (in many programming languages, both operands of the ternary operator are evaluated no matter which branch is selected). While, as you have pointed out, this is not a concern in Java, many programmers have used a variety of languages and might not remember this level of detail, and so it is best to use the ternary operator only with simple operands.
One frequently hears of "recursive" vs. "iterative"/"non-recursive" implementations. I have not heard of any particular names for the various options that you have given.
In any case, my recommendation would be to do as little in each statement as possible. The more things that you do in a single statement, the more confusing it will be for others who need to maintain your code.
In terms of your complaint about repetition... if there are several lines being repeated, then it is time to create a "helper" function that does that part. Function composition is there to reduce repetition. Swapping just doesn't make sense to do, since it is more effort to swap than to simply repeat... also, if code later in the function uses the parameters, the parameters now mean different things than they used to.
EDIT
My argument vis-à-vis the ternary operator was not a valid one... the vast majority of programming languages use lazy evaluation with the ternary operator (I was thinking of Verilog at the time of writing, which is a hardware description language (HDL) in which both branches are evaluated in parallel). That said, there are valid reasons to avoid using complicated expressions in ternary operators; for example, with an if...else statement it is possible to set a breakpoint on one of the conditional branches, whereas with the ternary operator both branches are part of the same statement, so most debuggers won't break on them separately.
It is slightly better to use another method instead of recursion
public boolean endOther(String a, String b) {
    return a.length() < b.length() ? endOtherImpl(b, a) : endOtherImpl(a, b);
}

protected boolean endOtherImpl(String longStr, String shortStr) {
    return longStr.toLowerCase().endsWith(shortStr.toLowerCase());
}
