Will Proguard or the Compiler Precalculate TimeUnit.Minutes.toMillis(120) - java

Currently I have a class with the following effectively constant field.
private static final long ACTIVITY_TIMEOUT_MS = 1 * 60 * 1000;
This is fine, but still not the most readable code in the world. What I'd rather use is the following:
private static final long ACTIVITY_TIMEOUT_MS = TimeUnit.MINUTES.toMillis(1);
Which clearly states I want the time to be 1 minute but that the field is milliseconds.
My question is will either the compiler or perhaps proguard fix this so there is no performance hit? If there will be a performance hit, can I expect that it is a one time hit per instance of the class?

Yes, this will be a one-time hit on class loading, and it will be such a tiny fraction of class loading that it's probably not even measurable against the overhead of loading the class in the first place.
No, the compiler can't figure it out, and I would be fairly surprised if ProGuard could, but it really doesn't matter.
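For anyone who wants to see the difference for themselves: the literal arithmetic version is a compile-time constant, while the TimeUnit version is computed once in the class initializer. A minimal sketch (the class name is made up; running javap -c on it shows the TimeUnit call sitting in the static initializer):

import java.util.concurrent.TimeUnit;

public class TimeoutHolder {
    // Compile-time constant: javac inlines the literal 60000 at every use site.
    private static final long LITERAL_TIMEOUT_MS = 1 * 60 * 1000;

    // Not a compile-time constant: evaluated exactly once when the class is initialized.
    private static final long ACTIVITY_TIMEOUT_MS = TimeUnit.MINUTES.toMillis(1);

    public static void main(String[] args) {
        // Both fields hold the same value at runtime; only when/how they are computed differs.
        System.out.println(LITERAL_TIMEOUT_MS + " == " + ACTIVITY_TIMEOUT_MS);
    }
}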

Unit testing code that relies on constant values

Consider the following (totally contrived) example:
public class Length {
    private static final int MAX_LENGTH = 10;
    private final int length;

    public Length(int length) {
        if (length > MAX_LENGTH)
            throw new IllegalArgumentException("Length too long");
        this.length = length;
    }
}
I would like to test that this throws an exception when called with a length greater than MAX_LENGTH. There are a number of ways this can be tested, all with disadvantages:
@Test(expected = IllegalArgumentException.class)
public void testMaxLength() {
    new Length(11);
}
This replicates the constant in the testing case. If MAX_LENGTH becomes smaller this will silently no longer be an edge case (though clearly it should be paired with a separate case to test the other side of the edge). If it becomes larger this will fail and need to be changed manually (which might not be a bad thing).
These disadvantages can be avoided by adding a getter for MAX_LENGTH and then changing the test to:
new Length(Length.getMaxLength() + 1);
This seems much better as the test does not need to be changed if the constant changes. On the other hand it is exposing a constant that would otherwise be private and it has the significant flaw of testing two methods at once - the test might give a false positive if both methods are broken.
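For illustration, boundary tests built on such a getter might look like this (JUnit 4; assumes Length exposes a static getMaxLength() accessor as described):

import org.junit.Test;

public class LengthTest {

    @Test(expected = IllegalArgumentException.class)
    public void rejectsLengthAboveMax() {
        new Length(Length.getMaxLength() + 1);
    }

    @Test
    public void acceptsLengthAtMax() {
        new Length(Length.getMaxLength()); // the edge itself should be accepted
    }
}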
An alternative approach is to not use a constant at all but, rather, inject the dependency:
interface MaxLength {
    int getMaxLength();
}

public class Length {
    private static MaxLength maxLength;

    public static void setMaxLength(MaxLength max) {
        maxLength = max;
    }
}
Then the 'constant' can be mocked as part of the test (example here using Mockito):
MaxLength mockedLength = mock(MaxLength.class);
when(mockedLength.getMaxLength()).thenReturn(17);
Length.setMaxLength(mockedLength);
new Length(18);
This seems to be adding a lot of complexity for not a lot of value (assuming there's no other reason to inject the dependency).
At this stage my preference is to use the second approach of exposing the constants rather than hardcoding the values in the test. But this does not seem ideal to me. Is there a better alternative? Or is the lack of testability of these cases demonstrating a design flaw?
As Tim alluded to in the comments, your goal is to make sure that your software behaves according to the specifications. One such specification might be that the maximum length is always 10, at which point it'd be unnecessary to test a world where the maximum is 5 or 15.
Here's the question to ask yourself: How likely is it that you'll want to use your class with a different value of the "constant"? I've quoted "constant" here because if you vary the value programmatically, it's not really a constant at all, is it? :)
If your value will never ever change, you could skip the symbolic constant entirely, just comparing against 10 directly and testing based on (say) 0, 3, 10, and 11. This might make your code and tests a little hard to understand ("Where did the 10 come from? Where did the 11 come from?"), and will certainly make it hard to change if you ever do have reason to vary the number. Not recommended.
If your value will probably never change, you could use a private named constant (i.e. a static final field), as you have. Then your code will be easy enough to change, though your tests won't be able to automatically adjust the way your code would.
You could also relax to package-private visibility, which would be available to tests in the same package. Javadoc (e.g. /** Package-private for testing. */) or documentation annotations (e.g. @VisibleForTesting) may help make your intentions clear. This is a nice option if your constant value is intended to be opaque and unavailable outside of your class, like a URL template or authentication token.
You could even make it a public constant, which would be available to consumers of your class as well. For your example of a constant Length, a public static final field is probably best, on the assumption that other pieces of your system may want to know about that (e.g. for UI validation hints or error messages).
If your value is likely to change you could accept it per-instance, as in new Length(10) or new Length().setMaxLength(10). (I consider the former to be a form of dependency injection, counting the constant integer as a dependency.) This is also a good idea if you wanted to use a different value in tests, such as using a maximum length of 2048 in production but testing against 10 for practicality's sake. To make a flexible length validator, this option is probably a good upgrade from a static final field.
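A minimal sketch of the per-instance option (all names are hypothetical): the maximum becomes a constructor argument, so a test can pass a small value like 10 while production code passes 2048.

public class LengthValidator {
    private final int maxLength;

    public LengthValidator(int maxLength) {
        this.maxLength = maxLength;
    }

    public void validate(int length) {
        if (length > maxLength)
            throw new IllegalArgumentException("Length too long: " + length);
    }
}

A test can then simply write new LengthValidator(10).validate(11) and expect the exception, without referring to any constant.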
Only if your value is likely to change during your instance's lifetime would I bother with a DI-style value provider. At that point, you can query the value interactively, so it doesn't behave at all like a constant. For "length", that'd be obvious overkill, but maybe not for "maximum allowed memory", "maximum simultaneous connections", or some other pseudo-constants like that.
In short, you'll have to decide how much control you need, and then you can pick the most straightforward choice from there; as a "default", you may want to make it a visible field or constructor parameter, as those tend to have good balance of simplicity and flexibility.

Is there any performance gain from using final modifier on non-primitive static data?

Is there any performance gain from using final modifier on non-primitive static data in Java?
For example:
static final Thread t2 = new Thread(new Thread_2());
versus:
static Thread t2 = new Thread(new Thread_2());
I mean, static final for primitives defines a true constant and is good to use, in the sense that its value is known at compile time, but could the use of final trigger any optimizations in this non-primitive case?
Does using final in this case do anything, or is it a waste?
Not style-wise/good practice answers please, but performance-wise only.
The JVM can make some optimizations if it knows a value will never change (for example, disabling null checks), so it will lead to some small performance gain. In almost all circumstances, though, this gain will be too small to notice or worry about. I don't know much about your program, but I would guess that time spent making variables final would be better spent on developing a more efficient algorithm.
While there can be a small performance gain, and some static final values will be embedded in the class in which they are used, to my mind the biggest benefit is from the compiler enforcing the intent & design that the value does not change - that is, the gain is to the developer(s).
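To be clear about what final buys you on a reference type: it fixes the reference, not the object it points to. A tiny sketch (the class name is made up):

public class FinalReferenceDemo {
    static final Thread T2 = new Thread(() -> {});

    public static void main(String[] args) {
        // T2 = new Thread();     // would not compile: a final field cannot be reassigned
        T2.setName("worker-2");   // still allowed: the Thread object itself remains mutable
        System.out.println(T2.getName());
    }
}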
And in my way of thinking, skeleton/framework code should be the best it can be in every way, not just performance, but stylistically too.
I would encourage you to declare everything as final unless it has a real need to be mutable - that is, things are final by default, unless needed to be otherwise.

Performance of Overriding vs. if-statement

I'm extending and improving a Java application which also does long-running searches with a small DSL (specifically, it is used for model finding, which is in general NP-complete).
During this search I want to show a small progress bar on the console. Because of the generic structure of the DSL I cannot calculate the overall search space size. Therefore I can only output the progress of the first "backtracking" statement.
Now the question:
I can use a flag for each backtracking statement to indicate that this statement should report the progress. When evaluating the statement I can check the flag with an if-statement:
public class EvalStatement {
    boolean reportProgress;

    public EvalStatement(boolean report) {
        reportProgress = report;
    }

    public void evaluate() {
        int progress = 0;
        while (someCondition) {
            // do something
            // maybe call other statement (tree structure)
            if (reportProgress) {
                // This is only executed by the root node, i.e.,
                // the condition is only true about 30 times whereas
                // it is false millions or billions of times
                ++progress;
                reportProgress(progress);
            }
        }
    }
}
I can also use two different classes:
A class which does nothing
A subclass that is doing the output
This would look like this:
public class EvalStatement {
    private ProgressWriter out;

    public EvalStatement(boolean report) {
        if (report)
            out = new ProgressWriterOut();
        else
            out = ProgressWriter.instance;
    }

    public void evaluate() {
        while (someCondition) {
            // do something
            // maybe call other statement (tree structure)
            out.reportProgress();
        }
    }
}

public class ProgressWriter {
    public static ProgressWriter instance = new ProgressWriter();

    public void reportProgress() {}
}

public class ProgressWriterOut extends ProgressWriter {
    private int progress = 0;

    @Override
    public void reportProgress() {
        // This is only executed by the root node, i.e.,
        // this override is only reached about 30 times whereas
        // the no-op version runs millions or billions of times
        ++progress;
        // Put progress anywhere, e.g.,
        System.out.print('#');
    }
}
And now really the question(s):
Is the Java lookup of the method to call faster than the if-statement?
In addition, would an interface and two independent classes be faster?
I know Log4J recommends putting an if-statement around log calls, but I think the main reason for that is the construction of the parameters, especially strings. I have only primitive types.
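(For reference, the Log4J guard idiom being referred to looks roughly like this; everything apart from the Log4J Logger API itself is made-up illustration:)

import org.apache.log4j.Logger;

public class GuardedLoggingExample {
    private static final Logger LOG = Logger.getLogger(GuardedLoggingExample.class);

    void reportStep(int statementId, long elapsedMs) {
        // The guard skips the string concatenation entirely when DEBUG is disabled.
        if (LOG.isDebugEnabled()) {
            LOG.debug("Evaluated statement " + statementId + " in " + elapsedMs + " ms");
        }
    }
}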
EDIT:
I clarified the code a little bit (what is called often... the usage of the singleton is irrelevant here).
Further, I made two long-term runs of the search where the if-statement and the overridden call, respectively, were hit 1,840,306,311 times on a machine doing nothing else:
The if version took 10 h 6 min 13 s (50,343 "hits" per second)
The override version took 10 h 9 min 15 s (50,595 "hits" per second)
I would say this does not give a real answer, because the 0.5% difference is within the measurement tolerance.
My conclusion: they behave more or less the same, but the overriding approach could be faster in the long term, as guessed by Kane in the answers.
I think this is the textbook definition of over-optimization. You're not really even sure you have a performance problem. Unless you're making MILLIONS of calls across that section, it won't even show up in your hotspot reports if you profile it. If statements and method calls are on the order of nanoseconds to execute, so any difference between them amounts to saving 1-10 ns at most. For that to even be perceived by a human as slow, it needs to be on the order of 100 milliseconds, and that's if the user is even paying attention, like actively clicking, etc. If they're watching a progress bar they aren't even going to notice it.
Say we wanted to see if that added even 1 s of extra time, and you found one of those could save 10 ns (it's probably more like a savings of 1-4 ns). That would mean you'd need that section to be called 100,000,000 times in order to save 1 s. And I can guarantee you that if you have 100 million calls being made you'll find 10 other areas that are more expensive than the choice of if or polymorphism there. Seems sorta silly to debate the merits of 10 ns on the off chance you might save 1 s, doesn't it?
I'd be more concerned about your usage of a singleton than performance.
I wouldn't worry about this - the cost is very small, output to the screen or computation would be much slower.
The only way to really answer this question is to try both and profile the code under normal circumstances. There are lots of variables.
That said, if I had to guess, I would say the following:
In general, an if statement compiles down to less bytecode than a method call, but once the JIT compiler optimizes, your method call may get inlined, at which point there is no call left at all. Also, with branch prediction of the if-statement, the cost is minimal.
Again, in general, using the interfaces will be faster than testing whether you should report every time the loop is run. Over the long run, the cost of loading two classes, testing once, and instantiating one is going to be less than running a particular test eleventy bajillion times.
Again, the better way to do this would be to profile the code on real world examples both ways, maybe even report back your results. However, I have a hard time seeing this being the performance bottleneck for your application... your time is probably better spent optimizing elsewhere if speed is a concern.
Putting anything on the monitor is orders of magnitude slower than either choice. If you really got a performance problem there (which I doubt) you'd need to reduce the number of calls to print.
I would assume that the method lookup is faster than evaluating the if(); in fact, the version with the if needs a method lookup as well.
And if you really want to squeeze out every bit of performance, use private final methods in your ProgressWriters, as this can allow the JVM to inline the method so that there is no method lookup, and not even a method call, in the machine code derived from the bytecode once it is finally compiled.
But, probably, they are both rather close in performance. I would suggest to test/profile, and then concentrate on the real performance issues.

Will things run quicker if I make my variables final?

I'm writing for Android (Java).
I'm declaring int's and float's as part of an ongoing loop.
Some of them don't need to be changed after declaration.
If I set them all to final when declaring, will things run quicker?
[Edit]
Thanks everyone. I didn't actually expect it to make any improvements, I just noticed, after browsing the source of various large projects, it was fairly common. Cheers
Things will not run quicker. The final keyword is just compile-time syntactic sugar.
If it were actually static final, then you could take advantage of compile-time calculation and inlining of the value at every reference. So, with for example:
private static final long ONE_WEEK_IN_MILLIS = 7 * 24 * 60 * 60 * 1000L;

public void foo(Date date) {
    if (date.getTime() > System.currentTimeMillis() + ONE_WEEK_IN_MILLIS) {
        // No idea what to do here?
    }
}
the compiler will optimize this so that it ends up like:
private static final long ONE_WEEK_IN_MILLIS = 604800000L;

public void foo(Date date) {
    if (date.getTime() > System.currentTimeMillis() + 604800000L) {
        // No idea what to do here?
    }
}
If you run a decompiler, you'll see it yourself.
Although setting a variable to final might have an impact on speed, the answer will most probably be different for each VM or device.
Declaring them final, however, doesn't hurt, and one could even call it good programming style.
As for performance, this looks almost certainly like premature optimization. Profile, find bottlenecks, rethink your algorithms. Don't waste your time with "final" just because of performance - it will barely solve any problem.
If you also make it static (a class variable) it can increase performance, and it is also good programming practice to use final for variables that you know will not change. You may not want it to be a class variable, though, in which case I am not sure whether it can improve performance, but I think it may in many cases.
http://docs.sun.com/app/docs/doc/819-3681/6n5srlhjs?a=view
The dynamic compiler can perform some constant folding optimizations easily, when you declare constants as static final variables.
Declare method arguments final if they are not modified in the method. In general, declare all variables final if they are not modified after being initialized or set to some value.
So, for example, if you have code that multiplies two of your static final constants, the result can be folded into a single constant ahead of time (by the compiler, or by the JIT while it optimizes), so the multiplication doesn't have to be performed during the busy periods.
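As a trivial, concrete illustration of that kind of constant folding (all names are made up): because both operands are compile-time constants, the product is already a constant by the time the code runs.

public class FoldingExample {
    static final int ROWS = 7;
    static final int COLS = 24;

    public static void main(String[] args) {
        // ROWS * COLS is folded to the literal 168; nothing is multiplied at run time.
        int cells = ROWS * COLS;
        System.out.println(cells);
    }
}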
I'd consider it a good practice to make variables final (you might use Eclipse's Preferences > Java > Code Style > Clean Up to do so). While performance might actually improve, I'd expect the differences to be negligible. In my opinion, it helps with readability of code though (i.e. no need to look for assignments) which certainly is a Good Thing (tm).
When we declare a variable final, it is identified at compile time, and while running the application the JVM does not have to check it for any modification since it is declared final (a constant), so we definitely remove some overhead from the JVM.
So we could say it may improve performance; it depends on your case. If the variable is a constant, make it final.
It is even better if you make it static final:
such constants are optimized by the JVM and kept in the constant pool of the class file. See http://negev.wordpress.com/java-memory-brief/

Java execution speed

I'm new to Java programming.
I am curious about speed of execution and also speed of creation and destruction of objects.
I've got several methods like the following:
private static void getAbsoluteThrottleB() {
    int A = Integer.parseInt(Status.LineToken.nextToken());
    Status.AbsoluteThrottleB = A * 100 / 255;
    Log.level1("Absolute Throttle Position B: " + Status.AbsoluteThrottleB);
}
and
private static void getWBO2S8Volts() {
    int A = Integer.parseInt(Status.LineToken.nextToken());
    int B = Integer.parseInt(Status.LineToken.nextToken());
    int C = Integer.parseInt(Status.LineToken.nextToken());
    int D = Integer.parseInt(Status.LineToken.nextToken());
    Status.WBO2S8Volts = ((A * 256) + B) / 32768;
    Status.WBO2S8VoltsEquivalenceRatio = ((C * 256) + D) / 256 - 128;
    Log.level1("WideBand Sensor 8 Voltage: " + Double.toString(Status.WBO2S8Volts));
    Log.level1("WideBand Sensor 8 Volt EQR:" + Double.toString(Status.WBO2S8VoltsEquivalenceRatio));
}
Would it be wise to create a separate method to process the data since it is repetitive? Or would it just be faster to execute it as a single method? I have several of these which would need to be rewritten, and I am wondering if it would actually improve speed of execution, or if it is just as good as it is, or if there is a number of instructions at which it becomes a good idea to create a new method.
Basically, what is faster or when does it become faster to use a single method to process objects versus using another method to process several like objects?
It seems like at runtime, pulling a new variable and then performing a math operation on it is quicker than calling another method, pulling a variable there, and then performing the math operation. My question is really about where the speed is gained or lost.
These methods are all called only to read data and set a Status.Variable. There are nearly 200 methods in my class which generate data.
The speed difference of invoking a piece of code inside a method or outside of it is negligible, especially compared with using the right algorithm for the task.
I would recommend you use the method anyway, not for performance but for maintainability. If you need to change one line of code which turns out to introduce a bug or something, and you have this code segment copy/pasted in 50 different places, it would be much harder to change (and spot) than having it in one single place.
So, don't worry about the performance penalty introduced by using methods, because it is practically nothing (even better, the VM may inline some of the calls).
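As a rough sketch of what that extraction might look like, building on the Status and Log classes from the question (the helper name is made up):

// Hypothetical helper: reads the next token from the current line as an int.
private static int nextTokenAsInt() {
    return Integer.parseInt(Status.LineToken.nextToken());
}

private static void getAbsoluteThrottleB() {
    Status.AbsoluteThrottleB = nextTokenAsInt() * 100 / 255;
    Log.level1("Absolute Throttle Position B: " + Status.AbsoluteThrottleB);
}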
I think S. Lott's comment on your question probably hits the nail perfectly on the head - there's no point optimizing code until you're sure the code in question actually needs it. You'll most likely end up spending a lot of time and effort for next to no gain, otherwise.
I'll also second Support's answer, in that the difference in execution time between invoking a separate method and invoking the code inline is negligible (this was actually what I wanted to post, but he kinda beat me to it). It may even be zero, if an optimizing compiler or JIT decides to inline the method anyway (I'm not sure if there are any such compilers/JITs for Java, however).
There is one advantage of the separate method approach however - if you separate your data-processing code into a separate method, you could in theory achieve some increased performance by having that method called from a separate thread, thus decoupling your (possibly time-consuming) processing code from your other code.
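A minimal sketch of that idea, using a single-threaded executor so the processing happens off the reading thread (all names here are invented for illustration):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AsyncProcessingExample {
    private final ExecutorService worker = Executors.newSingleThreadExecutor();

    void submitLine(String line) {
        // Hand the (possibly slow) processing step to the background thread.
        worker.execute(() -> processLine(line));
    }

    private void processLine(String line) {
        // ... the data-processing code that used to run inline ...
    }

    void shutdown() {
        worker.shutdown();
    }
}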
I am curious about speed of execution and also speed of creation and destruction of objects.
Creation of objects in Java is fast enough that you shouldn't need to worry about it, except in extreme and unusual situations.
Destruction of objects in a modern Java implementation has zero cost ... unless you use finalizers. And there are very few situations that you should even think of using a finalizer.
Basically, what is faster or when does it become faster to use a single method to process objects versus using another method to process several like objects?
The difference is negligible relative to everything else that is going on.
As @S.Lott says: "Please don't micro-optimize". Focus on writing code that is simple, clear, precise and correct, and that uses the most appropriate algorithms. Only "micro" optimize when you have clear evidence of a critical bottleneck.
