When not to use Java 8 streams? [closed]

I was just trying out a few code snippets and came across an observation: a simple for loop gave me better performance than a Java 8 stream. I may have missed something in my understanding of these things; I need help seeing the difference. My code is below.
// The following takes almost 3 ms
public int[] testPerf(int[] nums, int[] index) {
    List<Integer> arrayL = new ArrayList<>();
    for (int i = 0; i < index.length; i++) {
        arrayL.add(index[i], nums[i]);
    }
    return arrayL.stream().mapToInt(i -> i).toArray();
}
// The following takes almost 1 ms
public int[] testPerf(int[] nums, int[] index) {
    List<Integer> arrayL = new ArrayList<>();
    for (int i = 0; i < index.length; i++) {
        arrayL.add(index[i], nums[i]);
    }
    int[] result = new int[index.length];
    for (int i = 0; i < index.length; i++) {
        result[i] = arrayL.get(i);
    }
    return result;
}
EDIT: START
What am I trying to test? Inserting the elements of nums at the positions given by the index array, to form the final result.
Input: nums = [0,1,2,3,4], index = [0,1,2,2,1]
Output: [0,4,1,3,2]
Explanation:
nums  index  target
0     0      [0]
1     1      [0,1]
2     2      [0,1,2]
3     2      [0,1,3,2]
4     1      [0,4,1,3,2]
EDIT: END
Note that I have tested these with different inputs, including arrays of 2k+ elements, and the for-loop version still performed better.
Please help me understand what it is that makes the stream version take more time.
Also, please point me to any references on when NOT to use Java streams. (When not to overcomplicate? :) )

I use streams over loops (I use anything over loops) when I can, because streams are usually much easier to write without bugs, even in the absence of tests, thanks to their declarative, uniform syntax. For the same reason they are usually much easier to read for a non-author, even without thorough commenting. Also, streams are lazy (think "pull"), while for loops are eager (think "push"), and being lazy is usually better.
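A toy sketch of that pull behaviour: findFirst below only pulls elements until the filter matches, so later elements are never inspected.

import java.util.stream.Stream;

public class LazyDemo {
    public static void main(String[] args) {
        Stream.of("a", "bb", "ccc", "dddd")
              .peek(s -> System.out.println("inspecting " + s)) // shows which elements are actually pulled
              .filter(s -> s.length() >= 2)
              .findFirst();
        // Prints "inspecting a" and "inspecting bb" only;
        // "ccc" and "dddd" are never touched.
    }
}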
Unfortunately, streams have (sometimes substantial) overhead, so you need to pay attention when writing performance-critical code, especially with collectors.
So, something like "use streams when you can and loops where you must".
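If you do need to quantify that overhead, measure with a proper harness rather than by hand; single hand-timed runs (like the "3 ms" vs "1 ms" above) mostly measure JIT warm-up. A minimal JMH sketch, assuming the JMH dependency is available (class and method names are illustrative):

import java.util.Random;
import org.openjdk.jmh.annotations.*;

@State(Scope.Benchmark)
public class LoopVsStream {
    int[] nums = new int[2_000];

    @Setup
    public void fill() {
        Random r = new Random(42);
        for (int i = 0; i < nums.length; i++) nums[i] = r.nextInt();
    }

    @Benchmark
    public long loopSum() {
        long sum = 0;
        for (int n : nums) sum += n;
        return sum;
    }

    @Benchmark
    public long streamSum() {
        return java.util.Arrays.stream(nums).asLongStream().sum();
    }
}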

The goal of streams is to make the code more compact and more readable, to reduce boilerplate, and thus to simplify the developer's work. The goal of streams is not performance gains. Performance in a specific case can differ from that of a plain for loop, and streams can take a bit more memory. Also, how stream code is compiled to bytecode and executed can depend on the JVM: Oracle JDK, OpenJDK, IBM JDK, etc.
If you have strict requirements for memory and performance, then there is no ready-made answer. You should compare your production code (not the example you show here) in both variants, with for and with streams, and then choose what fits your strict requirements.
But...
But in most applications nowadays, the impact of loops on performance, whether for or streams, is very low compared to other operations like accessing a database or calling web services over the network. Even if for were 10x faster than streams, when a database access takes 2 000 ms you would not see a real difference.
Whereas the difference in code style can be essential. Developers can produce code that is more readable. When some developers on your team leave for other projects and new ones join and have to extend the existing code, they will need less time to understand it. Not that understanding for is hard :) But to many developers, reading code with multiple streams feels more natural than code with for.
There are a few special cases where streams are hard to apply, such as looping over 2 or 3 collections simultaneously. In such cases streams make little sense, and for is preferable; see the sketch below.
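For instance, walking two lists in lockstep (a sketch with made-up data): the for loop drives both collections with one index, while the stream needs an artificial index range to do the same.

import java.util.Arrays;
import java.util.List;
import java.util.stream.IntStream;

List<String> names = Arrays.asList("ann", "bob", "cec");
List<Integer> scores = Arrays.asList(1, 2, 3);

// Plain for loop: one index drives both collections.
for (int i = 0; i < names.size(); i++) {
    System.out.println(names.get(i) + " -> " + scores.get(i));
}

// Stream version: needs an index stream just to pair the elements up.
IntStream.range(0, names.size())
         .mapToObj(i -> names.get(i) + " -> " + scores.get(i))
         .forEach(System.out::println);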
But in general there are no rules for when to use or not to use a specific construct. You know that any while or do-while loop can be replaced with a for loop. So why does Java have different kinds of loops? Because some of them express the logic in particular cases better than the others. The same goes for streams. If you and your team don't feel comfortable with streams, you don't have to use them. Give them a try, use them for 3 months, then discuss within the team and decide whether every developer in your project should use them as much as possible or should avoid them.

Related

Is Java string concatenation optimisation applied in this case?

Let's imagine I have a lib which contains the following simple method:
private static final String CONSTANT = "Constant";

public static String concatStringWithCondition(String condition) {
    return "Some phrase" + condition + CONSTANT;
}
What if someone wants to use my method in a loop? As I understand it, the string optimisation (where + gets replaced with StringBuilder or whatever is more optimal) does not apply in that case? Or is it valid only for strings initialised outside the loop?
I'm using java 11 (Dropwizard).
Thanks.
No, this is fine.
The only case where string concatenation can be problematic is when you're using a loop to build one single string. Your method by itself is fine. Callers of your method can, of course, mess things up, but not in a way that's related to your method.
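That problematic pattern, for reference (a toy sketch; parts stands in for any collection of strings):

// Bad: each += allocates a brand-new String, so O(n^2) characters are copied overall.
String joined = "";
for (String part : parts) {
    joined += part;
}

// Fine: one StringBuilder, amortized O(n).
StringBuilder sb = new StringBuilder();
for (String part : parts) {
    sb.append(part);
}
String joined2 = sb.toString();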
The code as written should be as efficient as making a StringBuilder and appending these 3 parts to it. There is certainly no difference at all between a literal ("Some phrase") and an expression that the compiler can treat as a compile-time constant (which CONSTANT clearly is here, given that it is static, final, not null, and of a CTC-able type: all primitives and String).
However, is that 'efficient'? I doubt it: making a StringBuilder is not particularly cheap either. It's orders of magnitude cheaper than continually making new strings, sure, but there's always a bigger fish:
It doesn't matter
Computers are fast. Really, really fast. It is highly likely that you can write this incredibly badly (performance-wise) and it still won't be measurable. You won't even notice; it will be less than a millisecond slower.
In general, anybody who worries about performance at this level simply lacks perspective and knowledge: if you apply that level of fretting to your Java code, and you have the knowledge to know what could in theory be non-perfectly-performant, you'll be sweating over every 3rd character you ever type. That's no way to program. So gain that perspective (or take it from me; "just git gud" is not exactly something you can do in a week, so take it on faith for now, and as you learn you can start verifying it) and don't worry about it. Unless you actually run into a situation where the code is slower than it feels like it could be, or slower than it needs to be: then toss profilers and microbenchmark testing frameworks at it, and THEN, armed with all that information (and not before!), consider optimizing. The reports tell you what to optimize, because literally less than 1% of the code is responsible for 99% of the performance loss; spending any time on code that isn't in that 1% is an utter waste of time, hence why you must get those reports first, or not start at all.
... or perhaps it does
But if it does matter, and it's really that 1% of the code that is responsible for 99% of the loss, then usually you need to go a little further than just 'optimize the method'. Optimize the entire pipeline.
What is happening with this string? Take that into consideration.
For example, let's say that it, itself, is being appended to a much bigger StringBuilder. In that case, making a tiny StringBuilder here is incredibly inefficient compared to rewriting the method to:
public static void concatStringWithCondition(StringBuilder sb, String condition) {
    sb.append("Some phrase").append(condition).append(CONSTANT);
}
Or, perhaps this data is being turned into bytes using UTF_8 and then tossed onto a web socket. In that case:
private static final byte[] PREFIX = "Some phrase".getBytes(StandardCharsets.UTF_8);
private static final byte[] SUFFIX = CONSTANT.getBytes(StandardCharsets.UTF_8); // same constant as before

public void concatStringWithCondition(OutputStream out, String condition) throws IOException {
    out.write(PREFIX);
    out.write(condition.getBytes(StandardCharsets.UTF_8));
    out.write(SUFFIX);
}
and check whether that OutputStream is buffered. If not, make it buffered; that'll help a ton and will completely dwarf the cost of not using string concatenation. If the condition string can get quite large, the above is no good either: you want a CharsetEncoder that encodes straight to the OutputStream, and you may even want to replace all of that with some ByteBuffer-based approach.
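The buffering fix is a one-line wrap; a sketch (raw stands in for whatever unbuffered stream you were handed):

import java.io.BufferedOutputStream;
import java.io.OutputStream;

static OutputStream buffered(OutputStream raw) {
    // Wrap only if not already buffered; batches many small writes into fewer syscalls.
    return (raw instanceof BufferedOutputStream) ? raw : new BufferedOutputStream(raw);
}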
Conclusion
1. Assume performance is never relevant until it is.
2. If performance truly must be tackled, strap in: it'll take ages to do it right. Doing it 'wrong' (applying dumb rules of thumb that do not work) isn't useful. Either do it right, or don't do it at all.
3. If you're still on board, always start with profiler reports and use JMH to gather information.
4. Be prepared to rewrite the pipeline, changing the method signatures, in order to optimize.
5. That means that micro-optimizing, which usually sacrifices nice abstracted APIs, is actively bad for performance: changing pipelines is considerably more difficult if all the code is micro-optimized, given that this usually comes at the cost of abstraction.
And now the circle is complete: point 5 shows why worrying about performance the way you are doing in this question is in fact detrimental. It is far too likely that this worry results in you 'optimizing' some code in a way that doesn't actually run faster (because the JVM is a complex beast); and even if it did, it is irrelevant, because the code path this code is on is literally 0.01% or less of the total runtime expenditure. In the meantime you've made your APIs worse, and the lack of abstraction will make any actually useful optimization much harder than it needs to be.
But I really want rules of thumb!
Alright, fine. Here are 2 easy rules of thumb to follow that will lead to better performance:
When in rome...
The JVM is an optimising marvel and will run the craziest code quite quickly anyway. However, it does this primarily by being a giant pattern-matching machine: it finds recognizable code snippets and rewrites them to the fastest machine code it can, carefully tuned to juuust your combination of hardware. However, this pattern machine isn't voodoo magic: it has a limited set of patterns. Which patterns do JVM makers ship with their JVMs? Why, the common patterns, of course. Why include a pattern for exotic code virtually nobody ever writes? Waste of space.
So, write code the way Java programmers tend to write it. Which very much means: do not write crazy code just because you think it might be faster. It'll likely be slower. Just follow the crowd.
Trivial example:
Which one is faster:
List<String> list = new ArrayList<String>();
for (int i = 0; i < 10000; i++) list.add(someRandomName());
// option 1:
String[] arr = list.toArray(new String[list.size()]);
// option 2:
String[] arr = list.toArray(new String[0]);
You might think: obviously, option 1, right? Option 2 'wastes' a string array, making a 0-length array just to toss it in the garbage right after. But you'd be wrong: option 2 is in fact faster. (If you want an explanation: the JVM recognizes it and does a hacky move: it makes a new string array that does not need to be initialized with all zeroes first. Normal Java code cannot do this (arrays are necessarily initialized blank, to prevent memory corruption issues), but specifically .toArray(new X[0])? Those pattern-matching machines I told you about detect this and replace it with code that just blits the refs straight into a patch of memory without wasting time writing zeroes to it first.)
It's a subtle difference that is highly unlikely to matter - it just highlights: Your instincts? They will mislead you every time.
Fortunately, .toArray(new X[0]) is common Java code, and easier and shorter. So just write nice, convenient code that looks like what other folks write, and you'd have gotten the right answer here, without having to know such crazy esoterica as how the JVM would otherwise waste time zeroing out that array and how hotspot pattern matching can eliminate this, thus making it faster. That's just one of 5 million things you'd have to know, and nobody can know them all. Thus: just write Java code in a simple, common style.
Algorithmic complexity is a thing hotspot can't fix for you
Given an O(n^3) algorithm fighting an O(log(n) * n^2) algorithm, make n large enough and the second algorithm has to win; that's what big-O notation means. The JVM can do a lot of magic, but it can pretty much never optimize an algorithm into a faster class of algorithmic complexity. You might be surprised at how large n has to be before algorithmic complexity dominates, but it is fine to recognize that your algorithm can be fundamentally faster and to do the work of rewriting it to the more efficient algorithm even without profiler reports and benchmark harnesses and the like.
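A concrete instance of a complexity-class difference no JIT will bridge (a sketch): membership tests in a List scan linearly, while a HashSet probes a hash table.

import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class ComplexityDemo {
    public static void main(String[] args) {
        List<Integer> list = new ArrayList<>();
        for (int i = 0; i < 1_000_000; i++) list.add(i);
        Set<Integer> set = new HashSet<>(list);

        // O(n) per query: walks the list element by element.
        boolean inList = list.contains(999_999);
        // O(1) expected per query: a single hash lookup.
        boolean inSet = set.contains(999_999);
        System.out.println(inList + " " + inSet);
    }
}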

Is use of AtomicInteger for indexing in Stream a legit way?

I would like to get an answer pointing out the reasons why the idea described below with a very simple example is commonly considered bad, and to learn its weaknesses.
I have a sentence of words, and my goal is to uppercase every second one. My starting point for both cases is exactly the same:
String sentence = "Hi, this is just a simple short sentence";
String[] split = sentence.split(" ");
The traditional and procedural approach is:
StringBuilder stringBuilder = new StringBuilder();
for (int i = 0; i < split.length; i++) {
    if (i % 2 == 0) {
        stringBuilder.append(split[i]);
    } else {
        stringBuilder.append(split[i].toUpperCase());
    }
    if (i < split.length - 1) {
        stringBuilder.append(" ");
    }
}
When I want to use a java-stream, usage is limited by the final or effectively-final constraint on variables used in a lambda expression. I have to use a workaround with an array and its first and only index, which was suggested in the first comment on my question How to increment a value in Java Stream. Here is the example:
int[] index = {0};
String result = Arrays.stream(split)
        .map(i -> index[0]++ % 2 == 0 ? i : i.toUpperCase())
        .collect(Collectors.joining(" "));
Yeah, it's a bad solution, and I have heard a few good reasons somewhere, hidden in the comments of a question I am unable to find (if you remind me of some of them, I'd upvote twice if possible). But what if I use AtomicInteger: does it make any difference, and is it a good and safe way with no side effects compared to the previous one?
AtomicInteger atom = new AtomicInteger(0);
String result = Arrays.stream(split)
        .map(i -> atom.getAndIncrement() % 2 == 0 ? i : i.toUpperCase())
        .collect(Collectors.joining(" "));
Regardless of how ugly it might look to anyone, I'm asking for a description of the possible weaknesses and their reasons. I don't care about performance, only the design and possible weaknesses of the 2nd solution.
Please don't conflate AtomicInteger with multi-threading issues. I used this class because it receives, increments, and stores the value in the way I need for this example.
As I often say in my answers, the Java Stream API is not a silver bullet for everything. My goal is to explore and find the edge where that sentence is applicable, since I find the last snippet quite clear, readable, and brief compared to the StringBuilder snippet.
Edit: Does there exist any alternative way, applicable to the snippets above, for all the cases where one needs to work with both the item and its index while iterating with the Stream API?
The documentation of the java.util.stream package states that:
Side-effects in behavioral parameters to stream operations are, in general, discouraged, as they can often lead to unwitting violations of the statelessness requirement, as well as other thread-safety hazards.
[...]
The ordering of side-effects may be surprising. Even when a pipeline is constrained to produce a result that is consistent with the encounter order of the stream source (for example, IntStream.range(0,5).parallel().map(x -> x*2).toArray() must produce [0, 2, 4, 6, 8]), no guarantees are made as to the order in which the mapper function is applied to individual elements, or in what thread any behavioral parameter is executed for a given element.
This means that the elements may be processed out of order, and thus the stream solutions may produce wrong results.
This is (at least for me) the killer argument against the two stream solutions.
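To make the hazard concrete, a sketch: add a single .parallel() to the AtomicInteger variant and the counter, while thread-safe, is handed out in whatever order threads reach the mapper, so indices no longer correspond to positions.

AtomicInteger atom = new AtomicInteger(0);
String result = Arrays.stream(split).parallel()
        // getAndIncrement is atomic, but which element receives which
        // index depends on thread scheduling, not on position.
        .map(i -> atom.getAndIncrement() % 2 == 0 ? i : i.toUpperCase())
        .collect(Collectors.joining(" "))); // joining preserves encounter order,
        // but the wrong words may have been uppercased along the way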
By process of elimination, we only have the "traditional solution" left. And honestly, I do not see anything wrong with it. If we wanted to get rid of the for loop, we could rewrite this code using a for-each loop:
boolean toUpper = false; // 1st word is not capitalized
StringBuilder stringBuilder = new StringBuilder();
for (String word : split) {
    if (stringBuilder.length() > 0) {
        stringBuilder.append(" "); // keep the single-space separator
    }
    stringBuilder.append(toUpper ? word.toUpperCase() : word);
    toUpper = !toUpper;
}
For a streamified and (as far as I know) correct solution, take a look at Octavian R.'s answer.
Your question wrt. the "limits of streams" is opinion-based.
The answer to the question (s) ends here. The rest is my opinion and should be regarded as such.
In Octavian R.'s solution, an artificial index set is created through an IntStream, which is then used to access the String[]. For me, this has higher cognitive complexity than a simple for or for-each loop, and I do not see any benefit in using streams instead of loops in this situation.
In Java, compared with Scala, you must be inventive. One solution without mutation is this one:
String sentence = "Hi, this is just a simple short sentence";
String[] split = sentence.split(" ");
String result = IntStream.range(0, split.length)
        .mapToObj(i -> i % 2 == 0 ? split[i] : split[i].toUpperCase()) // odd positions uppercased, as in the question
        .collect(Collectors.joining(" "));
System.out.println(result);
In Java streams you should avoid mutation. Your solution with AtomicInteger is ugly and a bad practice.
Kind regards!
As explained in Turing85’s answer, your stream solutions are not correct, as they rely on the processing order, which is not guaranteed. This can lead to incorrect results with parallel execution today, but even if it happens to produce the desired result with a sequential stream, that’s only an implementation detail. It’s not guaranteed to work.
Besides that, there is no advantage in rewriting code to use the Stream API when the logic basically still is a loop, just obfuscated with a different API. The best way to describe the idea behind the new APIs is to say that you should express what to do, but not how to do it.
Starting with Java 9, you could implement the same thing as
String result = Pattern.compile("( ?+[^ ]* )([^ ]*)").matcher(sentence)
.replaceAll(m -> m.group(1)+m.group(2).toUpperCase());
which expresses the wish to replace every second word with its upper case form, but doesn’t express how to do it. That’s up to the library, which likely uses a single StringBuilder instead of splitting into an array of strings, but that’s irrelevant to the application logic.
As long as you’re using Java 8, I’d stay with the loop and even when switching to a newer Java version, I would consider replacing the loop as not being an urgent change.
The pattern in the above example has been written to do exactly the same as your original code, splitting at single space characters. Usually, I'd encode "replace every second word" more like:
String result = Pattern.compile("(\\w+\\W+)(\\w+)").matcher(sentence)
.replaceAll(m -> m.group(1)+m.group(2).toUpperCase());
which would behave differently when encountering multiple spaces or other separators, but usually is closer to the actual intention.

Optimising a for loop for adding elements in an arraylist [closed]

I have a loop in which n is very large. I have an ArrayList alist which adds elements one by one on each iteration of the loop. Is there any way to optimise this loop so the code runs faster, or any way to add the elements in parallel?
Thanks in advance.
for (int i = 0; i <= n; i++) {
    /*
     * some code
     */
    alist.add(element);
}
Actually doing it in parallel (on multiple threads) would likely be slower as they'd all be in contention, since you have to serialize access to the ArrayList.
About the only thing you can realistically do (based on what little information is in the question) is ensure that the ArrayList has enough capacity at the outset for all of the elements you're going to add, so it doesn't have to do a bunch of reallocations as you go. Assuming you already have an alist, you do this by calling ensureCapacity:
alist.ensureCapacity(alist.size() + n + 1); // +1 because you're adding n+1 elements
for (int i = 0; i <= n; i++) { // Note this loops n+1 times
alist.add(/*...some element...*/);
}
If you don't already have alist, you do it by giving an argument to the constructor:
alist = new ArrayList<>(n + 1);
for (int i = 0; i <= n; i++) {
    alist.add(/*...some element...*/);
}
If you don't do that, the ArrayList may have to reallocate the array repeatedly during the loop, which is slower than doing it once at the outset.
If you already have a collection of some kind and are adding all of its elements to alist, you can use addAll rather than looping yourself. But that basically just does ensureCapacity followed by a bunch of adds.
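A sketch of the difference (assuming alist is an ArrayList<String> and source is an existing collection of strings; both names are illustrative):

// One call: the backing array is sized once, then the elements are bulk-copied.
alist.addAll(source);

// Roughly equivalent to doing it by hand:
alist.ensureCapacity(alist.size() + source.size());
for (String e : source) {
    alist.add(e);
}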
When you instantiate the ArrayList, make sure you use the single-argument constructor:
ArrayList</*your type*/>(n + 1)
This will set the capacity of the ArrayList to the required number of elements. Doing so prevents memory reallocations, which helps runtime performance. It doesn't restrict the ArrayList's capacity; it just advises the object that it could get that big.
As for further optimisations, that would really depend on whether or not /*some code*/ is the bottleneck; which it probably is.
If you are using an ArrayList then presumably order is important. In that case you really can't use conventional techniques to add elements from different threads; they will still have to be processed serially to be added in the correct order.
Depending on your application it may be possible to use Java 8 parallel streams. If each element can be computed independently, it might be possible to perform each of the steps in parallel and then collect the results into a list. Something like:
List<Element> result = IntStream.rangeClosed(0, n).parallel() // rangeClosed: the loop runs n+1 times
        .mapToObj(i -> codeCreatingElement(i)) // parameter renamed from n, which would shadow the outer n
        .collect(Collectors.toList());

Code design: performance vs maintainability [closed]

Contextualisation
I am implementing a bytecode instrumenter using the Soot framework in a testing context, and I want to know which design is better.
I build a TraceMethod object for every method in a class that I am instrumenting, and I want to run this instrumenter on multiple classes.
Which option offers better performance (space-time)?
Option 1: (Maps)
public class TraceMethod {
    boolean[] decisionNodeList;
    boolean[] targetList;
    Map<Integer, List<Integer>> dependenciesMap;
    Map<Integer, List<Double>> decisionNodeBranchDistance;
}
Option 2: (Objects)
public class TraceMethod {
    ArrayList<Target> targets = new ArrayList<Target>();
    ArrayList<DecisionNode> decisionNodes = new ArrayList<DecisionNode>();
}

public class DecisionNode {
    int id;
    Double branchDistance;
    boolean reached;
}

public class Target {
    int id;
    boolean reached;
    List<DecisionNode> dependencies;
}
I implemented option 2 myself, but my boss suggested option 1, arguing that it is "lighter". I saw in the article "Class Object vs Hashmap" that HashMaps use more memory than objects, but I'm still not convinced that my solution (option 2) is better.
It's a simple detail, but I want to be sure I'm using the optimal solution; my concern is performance (space-time). I know the second option is way better in terms of maintainability, but I can sacrifice that if it's not optimal.
In general you should always go for maintainability, not for supposed performance. There are a few good reasons for this:
We tend to be fascinated by the speed difference between an array and a HashMap, but in a real enterprise application these differences are not big enough to make a visible difference in application speed.
The most common bottlenecks in an application are in the database or the network.
The JVM optimizes code to some extent.
It is very unlikely that your application will have performance issues due to maintainable code. The more likely case is that your boss will run out of money when you have millions of lines of unmaintainable code.
Approach 1 has the potential to be much faster and to use less space.
Especially for a bytecode instrumenter, I would implement approach 1 first.
Then, once it works, replace both Lists with non-generic lists that use primitive types instead of the Integer and Double objects.
Note that an int needs 4 bytes, while an Integer (an object) needs 16 to 20 bytes, depending on the machine (16 on PC, 20 on Android).
The List<Integer> can be replaced with a GrowingIntArray (I found that in a statistics package of Apache, if I remember correctly), which uses primitive ints. (Or maybe just replace it with an int[] once you know the content cannot change anymore.)
Then you just write your own GrowingDoubleArray (or use double[]).
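A minimal sketch of such a primitive-backed growable array (a hypothetical class, not the Apache one):

import java.util.Arrays;

// Stores ints unboxed: roughly 4 bytes per element instead of a 16-20 byte Integer.
public class GrowingIntArray {
    private int[] data = new int[16];
    private int size = 0;

    public void add(int value) {
        if (size == data.length) {
            data = Arrays.copyOf(data, data.length * 2); // double on overflow
        }
        data[size++] = value;
    }

    public int get(int index) { return data[index]; }

    public int size() { return size; }

    // Trimmed copy, once the contents are final.
    public int[] toArray() { return Arrays.copyOf(data, size); }
}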
Remember, Collections are handy but slower. Objects use 4 times more space than primitives.
A bytecode instrumenter needs performance; it is not software that runs once a week.
Finally, I would not replace the Maps with non-generic ones; that seems like too much work to me. But you may try it as a last step.
As a final optimization step, look at how many elements are typically in your lists or maps. If there are usually fewer than 16 (you have to try that out), you may switch to a linear search, which is the fastest for a very low number of elements.
You can even make your code intelligent enough to switch search algorithms once the number of elements exceeds a specific threshold (Sun/Oracle Java does this, and Apple/iOS too, in some of their collections); a sketch follows below.
However, this last step will make your code much more complex.
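Here is a sketch of that size-dependent switch (the threshold of 16 is illustrative and has to be measured on the target machine):

// Linear scan wins on tiny arrays (cache-friendly, trivially predictable);
// binary search wins once the array grows. Assumes 'sorted' is sorted ascending.
static int indexOf(int[] sorted, int key) {
    if (sorted.length < 16) {
        for (int i = 0; i < sorted.length; i++) {
            if (sorted[i] == key) return i;
        }
        return -1;
    }
    int idx = java.util.Arrays.binarySearch(sorted, key);
    return idx >= 0 ? idx : -1;
}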
Space, as an example:
DecisionNode: 16 bytes for the object header + 4 (id) + 20 (Double) + 4 (boolean) = 44, plus 4 bytes of padding to the next multiple of 8 = 48 bytes.

i++ or i-- in a for loop? [closed]

While writing a for loop where both the start and end conditions are known, which way is better? Let's say I have to loop over an array of size 5 to add up its elements. In this case, which of the following would be more efficient as far as execution time is concerned? Which one will give better performance?
for (i = 0; i < 5; i++)
{
/* logic */
}
OR
for (i = 4; i >= 0; i--)
{
/* logic */
}
Apart from the difficulty of writing i = 5 - 1, that is, i = 4, are there any other considerations?
It's usually recommended to concentrate on making code as clear and as logical as possible, without worrying about micro-optimizations or other factors. In your case, the first one is the better choice, since most programmers are more used to traversing an array in that order.
Both versions will have the same result (given that they're implemented correctly) and will have exactly the same run time.
EDIT: @Zane mentioned in a comment that looping backwards to zero used to be faster. It was; the reason was that comparing a variable against zero was faster. Given that computers were much, much slower in those days, such optimizations were encouraged. Those days are indeed over...
There is something wrong in your code.
The first loop is fine, but the second would never execute: it runs 0 times. It should be
for (i = 4; i >= 0; i--) {}
Besides, if you ask which is better, it's your choice, whichever one you are comfortable with.
For me, the first one feels more comfortable.
In most cases it wouldn't matter; however, there are some situations where non-obvious side effects might interfere.
Consider a loop:
for (int i = 0; i < strlen(str); i++) { /* do stuff on i-th elem */ }
Here strlen(str) will be re-evaluated on each iteration (unless optimized by the compiler), even though that's completely unnecessary; the programmer most likely didn't even consider it.
It might be worth replacing the loop with:
for (int i = strlen(str); i > 0; i--) { /* do stuff on i-th elem */ }
Here the length of the string is evaluated only once.
Of course, in the first loop the problem can also be avoided by holding the length of the string in an additional variable, but that's just unnecessary noise, not related to the program logic.
The most obvious answer is: which one has the semantics you want? They visit the objects in a different order.
As a general rule, if there are no other considerations, people expect ascending order, and this is what you should use when visiting objects.
In C++, it is far more idiomatic to use iterators for this. Normal iterators visit in ascending order, reverse iterators in descending order. If you don't explicitly need descending order, you should use normal iterators. This is what people expect, and when you do use reverse iterators, the first thing a reader will ask is why. Also, I haven't measured, but it wouldn't surprise me if normal iterators were faster than reverse iterators. In Java, iterators are also idiomatic, and you don't have reverse iterators.
If I do need descending order when visiting, I'll use a while loop (if I don't have reverse iterators, which do it for me); I find something like:
int index = numberOfElements;
while (index != 0) {
    --index;
    // ...
}
far more readable (and easier to get right) than any of the alternatives.
If you're not visiting objects, but just counting, descending order seems more natural to me: the control variable contains the number of times left. And since the count is never used as an index, there's no problem with it being off by one as an index, and you can use a traditional for:
for (int count = numberOfTimes; count != 0; --count) {
    // ...
}
But it's really a question of style; I've seen a lot of ascending loops for this as well.
Whether you opt for an incrementing or a decrementing for loop depends on how you want to use the counter variable, or simply on which reads better.
If you are accessing an array in ascending order, an incrementing for loop is used:
for (i = 0; i < 5; i++) {
    arr[i];
}
If you are accessing an array or list in descending order, a decrementing for loop is used:
for (i = 5; i > 0; i--) {
    arr[i - 1];
}
If the counter value has no significance for the value being accessed, then readability is what counts, and the incrementing for loop looks more pleasing to the eye.
I would say the loop with i++ is easier to understand. Also, going backwards can make suboptimal use of the processor cache, but usually compilers and virtual machines are smarter than that.
I believe most programmers would understand your code more quickly with the first method (i++). Unless you need to process the array in reverse, I would stick with the first method. As for performance, I believe there would be little or no benefit to either solution.
Also, you may want to consider using the for-each (enhanced for) syntax, which is quite a bit tidier:
int[] x = {1, 2, 3, 4, 5};
for (int y : x) {
    System.out.println(y);
}
