1. Suppose the following two situations.
First:
int x;
for (int i = 0; i < MAX; i++){
// some magic with x
System.out.println(x);
}
Second:
for (int i = 0; i < MAX; i++){
int x;
// some magic with x
System.out.println(x);
}
So which piece of code is better and more efficient?
2. Another two situations.
First:
int size = array.length;
for (int i = 0; i < size; i++) {
// do something with array[i]
}
Second:
for (int i = 0; i < array.length; i++) {
// do something with array[i]
}
Is it more efficient to save the array length to a variable?
length is just a field on the array object, so reading it takes essentially no time.
It is the same as reading a variable; the JVM does not have to loop over the whole array to get the length.
And the first two snippets differ only in scope.
Local - only available within the loop.
Global - available throughout the method.
Assume you want the remainders of numbers when divided by 2.
for(int i = 0; i < 10; ++i)
{
int remainder = i % 2;
System.out.println(remainder);
}
And assume calculating the sum of first 10 natural numbers.
int sum = 0;
for(int i = 0; i <= 10; ++i)
{
// declaring sum here doesn't make sense
sum += i;
}
System.out.println(sum);//sum is available here.
PS: you could also just use the formula for the sum of the first n natural numbers.
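For instance, a minimal sketch using the closed-form formula n*(n+1)/2 (with n = 10 to match the loop above):
int n = 10;
int sum = n * (n + 1) / 2; // sum of the first n natural numbers
System.out.println(sum);   // prints 55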
1. What you should be more concerned with here, is not efficiency, but scope. Generally, you should strive to keep your variables as locally scoped as possible. This means, if you only need x within the loop, you should define it within the loop.
You get a number of benefits with keeping your variables as locally scoped as possible:
Your code will be much more readable to someone else
You won't accidentally assign to, or use the value of a variable you defined further up in your code that is still in scope, thus minimizing errors in your program
Most importantly, the garbage collector will free up any memory used by the variable as soon as it goes out of scope, keeping your program's performance high, and memory usage low.
You can read up more on variable scope and best practices from Josh Bloch's excellent book, "Effective Java" (scope is discussed in items 13 and 45). You might also want to read item 55, which discusses why it is important to optimize judiciously.
2. For the second part of your question, see The Skeet's answer here.
Here's an example:
public static void main(String[] args) {
for(int i=0; i<getSize(); i++) {
System.out.println("i: " + i);
}
}
private static int getSize() {
int size = new Random().nextInt(10);
System.out.println("size: " + size);
return size;
}
This outputs:
size: 2
i: 0
size: 4
i: 1
size: 4
i: 2
size: 8
i: 3
size: 0
Notice how getSize() is called for every iteration of the loop. In your example, calling .length won't make a huge difference, as the JIT runtime will know how to optimize this call. But imagine getSize() was a more complex operation, like counting the number of rows in a database table. Your code will be super slow as every iteration of the loop will call getSize(), resulting in a database roundtrip.
This is when you would be better off evaluating the value before hand. You can do this and still retain minimal scope for size, like this:
public static void main(String[] args) {
for(int size = getSize(), i=0; i<size; i++) {
System.out.println("i: " + i);
}
}
private static int getSize() {
int size = new Random().nextInt(10);
System.out.println("size: " + size);
return size;
}
size: 5
i: 0
i: 1
i: 2
i: 3
i: 4
Notice how getSize() is only called once, and also, the size variable is only available inside the loop and goes out of scope as soon as the loop completes.
Related
I was wondering if this is the most efficient approach, or even good code practice, for adding arrays into a single array; as far as my knowledge goes its time complexity would be O(n). This is only for practice, and I want to do it for int[], not have the code changed to use a List.
static int[] allArrayDirections(int row[], int col [], int diag []) {
int counter = 0;
int allDirectionsInMatrix [] = new int [row.length + col.length + diag.length];
for(int i = 0; i < row.length; i++) {
allDirectionsInMatrix[counter++] = row[i];
}
for(int j = 0; j < col.length; j++) {
allDirectionsInMatrix[counter++] = col[j];
}
for(int i = 0; i < diag.length; i++) {
allDirectionsInMatrix[counter++] = diag[i];
}
return allDirectionsInMatrix;
}
You could compare your linear solution to a sort of unravelled one:
Find the longest of the three arrays, take its length as counter boundary
assign the longest array to a new variable first, the other two to second and third – just fiddling with the references so the for looks straightforward
loop once from 0 to the counter boundary
fill your target array in three steps, using the other array's length as offset – unless the smaller arrays are already exhausted, so skip them
This will need an extra add operation for the offset calculation when writing to allDirectionsInMatrix, and two greater-than checks. Depending on the VM's optimization / array lengths / call frequency this might cut it in half.
The single for-loop looks similar to this:
// assuming first.length >= second.length >= third.length;
for(int i=0;i<largestLength;i++) {
allDirectionsInMatrix[i]=first[i];
if (second.length > i)
allDirectionsInMatrix[i+first.length]=second[i];
// I assume when called often enough VM does auto trickery
// with the repeated addition of i and first.length
if (third.length > i)
allDirectionsInMatrix[i+first.length+second.length]=third[i];
}
But this might also break other VM optimizations when treating the arrays independently. So really do compare the runtimes. I'd be interested to read about your measurements.
Just FYI (and not a real answer to your question, but maybe interesting for comparison)
One way to concatenate integer arrays (using Streams) would be
int[] a = {1,2,3};
int[] b = {4,5,6,7};
int[] c = IntStream.concat(Arrays.stream(a), Arrays.stream(b)).toArray();
System.out.println(Arrays.toString(c));
(Stream.concat for arrays of other types)
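A similar sketch for object arrays, using Stream.concat and a typed toArray (the String arrays here are only illustrative):
String[] x = {"one", "two"};
String[] y = {"three"};
String[] z = Stream.concat(Arrays.stream(x), Arrays.stream(y)).toArray(String[]::new);
System.out.println(Arrays.toString(z)); // [one, two, three]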
In my java program I have a for-loop looking roughly like this:
ArrayList<MyObject> myList = new ArrayList<MyObject>();
putThingsInList(myList);
for (int i = 0; i < myList.size(); i++) {
doWhatsoever();
}
Since the size of the list isn't changing, I tried to accelerate the loop by replacing the termination expression of the loop with a variable.
My idea was: Since the size of an ArrayList can possibly change while iterating it, the termination expression has to be executed each loop cycle. If I know (but the JVM doesn't), that its size will stay constant, the usage of a variable might speed things up.
ArrayList<MyObject> myList = new ArrayList<MyObject>();
putThingsInList(myList);
int myListSize = myList.size();
for (int i = 0; i < myListSize; i++) {
doWhatsoever();
}
However, this solution is slower, way slower; making myListSize final doesn't change anything either! I could understand it if the speed didn't change at all, because maybe the JVM just found out that the size doesn't change and optimized the code. But why is it slower?
However, I rewrote the program; now the size of the list changes with each cycle: if i%2==0, I remove the last element of the list, else I add one element to the end of the list. So now the myList.size() operation has to be called within each iteration, I guessed.
I don't know if that's actually correct, but still the myList.size() termination expression is faster than using just a variable that remains constant all the time as termination expression...
Any ideas why?
Edit (I'm new here, I hope this is the right way to do it)
My whole test program looks like this:
ArrayList<Integer> myList = new ArrayList<Integer>();
for (int i = 0; i < 1000000; i++)
{
myList.add(i);
}
final long myListSize = myList.size();
long sum = 0;
long timeStarted = System.nanoTime();
for (int i = 0; i < 500; i++)
{
for (int j = 0; j < myList.size(); j++)
{
sum += j;
if (j%2==0)
{
myList.add(999999);
}
else
{
myList.remove(999999);
}
}
}
long timeNeeded = (System.nanoTime() - timeStarted)/1000000;
System.out.println(timeNeeded);
System.out.println(sum);
Performance of the posted code (average of 10 executions):
4102ms for myList.size()
4230ms for myListSize
Without the if-then-else statements (so with constant myList size)
172ms for myList.size()
329ms for myListSize
So the speed difference between the two versions is still there. In the version with the if-then-else parts the percentage differences are of course smaller, because a lot of the time is spent in the add and remove operations on the list.
The problem is with this line:
final long myListSize = myList.size();
Change this to an int and lo and behold, running times will be identical. Why? Because comparing an int to a long for every iteration requires a widening conversion of the int, and that takes time.
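A minimal sketch of the suggested change, assuming myList is populated exactly as in the posted test program:
final int myListSize = myList.size(); // int, so the loop compares int to int
long sum = 0;
for (int j = 0; j < myListSize; j++) {
    sum += j; // same loop body as before, with no widening conversion in the condition
}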
Note that the difference also largely (but probably not completely) disappears when the code is compiled and optimised, as can be seen from the following JMH benchmark results:
# JMH 1.11.2 (released 7 days ago)
# VM version: JDK 1.8.0_51, VM 25.51-b03
# VM options: <none>
# Warmup: 20 iterations, 1 s each
# Measurement: 20 iterations, 1 s each
# Timeout: 10 min per iteration
# Threads: 1 thread, will synchronize iterations
# Benchmark mode: Throughput, ops/time
...
# Run complete. Total time: 00:02:01
Benchmark Mode Cnt Score Error Units
MyBenchmark.testIntLocalVariable thrpt 20 81892.018 ± 734.621 ops/s
MyBenchmark.testLongLocalVariable thrpt 20 74180.774 ± 1289.338 ops/s
MyBenchmark.testMethodInvocation thrpt 20 82732.317 ± 749.430 ops/s
And here's the benchmark code for it:
import java.util.ArrayList;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

public class MyBenchmark {
@State(Scope.Benchmark)
public static class Values {
private final ArrayList<Double> values;
public Values() {
this.values = new ArrayList<Double>(10000);
for (int i = 0; i < 10000; i++) {
this.values.add(Math.random());
}
}
}
@Benchmark
public double testMethodInvocation(Values v) {
double sum = 0;
for (int i = 0; i < v.values.size(); i++) {
sum += v.values.get(i);
}
return sum;
}
@Benchmark
public double testIntLocalVariable(Values v) {
double sum = 0;
int max = v.values.size();
for (int i = 0; i < max; i++) {
sum += v.values.get(i);
}
return sum;
}
@Benchmark
public double testLongLocalVariable(Values v) {
double sum = 0;
long max = v.values.size();
for (int i = 0; i < max; i++) {
sum += v.values.get(i);
}
return sum;
}
}
P.s.:
My idea was: Since the size of an ArrayList can possibly change while
iterating it, the termination expression has to be executed each loop
cycle. If I know (but the JVM doesn't), that its size will stay
constant, the usage of a variable might speed things up.
Your assumption is wrong for two reasons: first of all, the VM can easily determine via escape analysis that the list stored in myList doesn't escape the method (so it's free to allocate it on the stack for example).
More importantly, even if the list was shared between multiple threads, and therefore could potentially be modified from the outside while we run our loop, in the absence of any synchronization it is perfectly valid for the thread running our loop to pretend those changes haven't happened at all.
As always, things are not always what they seem...
First things first: ArrayList.size() doesn't recompute anything on each invocation; the size field is only updated when a mutator is invoked, and size() simply returns it. So calling it frequently is quite cheap.
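For illustration, OpenJDK's ArrayList.size() is essentially nothing more than a field read (paraphrased here, not the full source file):
public int size() {
    return size; // the size field is maintained by add/remove, not recomputed here
}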
Which of these loops is the fastest?
// array1 and array2 are the same size.
int sum = 0;
for (int i = 0; i < array1.length; i++) {
sum += array1[i];
}
for (int i = 0; i < array2.length; i++) {
sum += array2[i];
}
or
int sum = 0;
for (int i = 0; i < array1.length; i++) {
sum += array1[i];
sum += array2[i];
}
Instinctively, you would say that the second loop is the fastest since it doesn't iterate twice. However, some optimizations actually cause the first loop to be the fastest depending, for instance, on memory walking strides that cause a lot of memory cache misses.
Side-note: this compiler optimization technique is called loop
jamming.
This loop:
int sum = 0;
for (int i = 0; i < 1000000; i++) {
sum += list.get(i);
}
is not the same as:
// Assume that list.size() == 1000000
int sum = 0;
for (int i = 0; i < list.size(); i++) {
sum += list.get(i);
}
In the first case, the compiler absolutely knows that it must iterate a million times and puts the constant in the Constant Pool, so certain optimizations can take place.
A closer equivalent would be:
int sum = 0;
final int listSize = list.size();
for (int i = 0; i < listSize; i++) {
sum += list.get(i);
}
but only after the JVM has figured out what the value of listSize is. The final keyword gives the compiler/run-time certain guarantees that can be exploited. If the loop runs long enough, JIT-compiling will kick in, making execution faster.
Because this sparked my interest, I decided to do a quick test:
import java.util.ArrayList;

public class fortest {
public static void main(String[] args) {
long mean = 0;
for (int cnt = 0; cnt < 100000; cnt++) {
if (mean > 0)
mean /= 2;
ArrayList<String> myList = new ArrayList<String>();
putThingsInList(myList);
long start = System.nanoTime();
int myListSize = myList.size();
for (int i = 0; i < myListSize; i++) doWhatsoever(i, myList);
long end = System.nanoTime();
mean += end - start;
}
System.out.println("Mean exec: " + mean/2);
}
private static void doWhatsoever(int i, ArrayList<String> myList) {
if (i % 2 == 0)
myList.set(i, "0");
}
private static void putThingsInList(ArrayList<String> myList) {
for (int i = 0; i < 1000; i++) myList.add(String.valueOf(i));
}
}
I do not see the kind of behavior you are seeing.
2500ns mean execution time over 100000 iterations with myList.size()
1800ns mean execution time over 100000 iterations with myListSize
I therefore suspect that it is the code executed inside your functions that is at fault. In the above example you can sometimes see faster execution if you only fill the ArrayList once, because doWhatsoever() will then only do something on the first loop. I suspect the rest is being optimized away, which significantly drops the execution time. You might have a similar case, but without seeing your code it is close to impossible to figure that out.
There is another way to speed up the code, using a for-each loop:
ArrayList<MyObject> myList = new ArrayList<MyObject>();
putThingsInList(myList);
for (MyObject ob: myList) {
doWhatsoever();
}
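For reference, the enhanced for loop above is roughly equivalent to iterating with an explicit Iterator rather than calling size() in the loop condition (a sketch, not the exact compiled form):
for (Iterator<MyObject> it = myList.iterator(); it.hasNext(); ) {
    MyObject ob = it.next();
    doWhatsoever();
}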
But I agree with @showp1984 that some other part is slowing the code.
This question already has an answer here:
How to iterate through array combinations with constant sum efficiently?
(1 answer)
Closed 9 years ago.
I have 12 products at a blend plant (call them a - l) and need to generate varying percentages of them, the total obviously adding up to 100%.
Something simple such as the code below will work, however it is highly inefficient. Is there a more efficient algorithm?
*Edit: As mentioned below, there are just too many possibilities to compute, efficiently or not. I will change this to having a maximum of 5 of the 12 products in a blend, and then run it against the number of ways that 5 products can be chosen from the 12 products.
Some of you have pointed to Python code that seems to work out the possibilities from the combinations. However, my Python is minimal (i.e. 0%); would one of you be able to explain this in Java terms? I can get the combinations in Java (http://www.cs.colostate.edu/~cs161/Fall12/lecture-codes/Subsets.java)
public class Main {
public static void main(String[] args) throws FileNotFoundException, UnsupportedEncodingException {
for(int a=0;a<=100;a++){
for(int b=0;b<=100;b++){
for(int c=0;c<=100;c++){
for(int d=0;d<=100;d++){
for(int e=0;e<=100;e++){
for(int f=0;f<=100;f++){
for(int g=0;g<=100;g++){
for(int h=0;h<=100;h++){
for(int i=0;i<=100;i++){
for(int j=0;j<=100;j++){
for(int k=0;k<=100;k++){
for(int l=0;l<=100;l++){
if(a+b+c+d+e+f+g+h+i+j+k+l==100)
{
System.out.println(a+" "+b+" "+c+" "+d+" "+e+" "+f+" "+g+" "+h+" "+i+" "+j+" "+k+" "+l);
}}}}}}}}}}}}}
}
}
Why make it so difficult? Think of it in a simple way.
To keep the scenario simple, consider 5 numbers to be generated randomly. The pseudo-code would be something like this:
Generate 5 random numbers, R1, R2 ... R5
total = sum of those 5 random numbers
For each item to produce:
produce1 = R1/total; // produce[i] = R[i]/total;
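A minimal Java sketch of that idea; the class name, the bound passed to nextInt and the choice of 5 products are illustrative, not part of the original pseudo-code:
import java.util.Random;

public class RandomBlend {
    public static void main(String[] args) {
        Random rnd = new Random();
        double[] r = new double[5];
        double total = 0;
        for (int i = 0; i < r.length; i++) {
            r[i] = 1 + rnd.nextInt(100); // generate 5 random numbers R1..R5
            total += r[i];               // total = sum of those numbers
        }
        for (int i = 0; i < r.length; i++) {
            // produce[i] = R[i] / total, expressed as a percentage
            System.out.printf("product %d: %.2f%%%n", i + 1, 100.0 * r[i] / total);
        }
    }
}
Note that this produces one random blend per run rather than enumerating all possible blends, which is what the question asks for.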
Please, don't use nested for loops that deep! Use recursion instead:
public static void main(String[] args) {
int N = 12;
int goal = 100;
generate(N, 0, goal, new int[N]);
}
public static void generate(int i, int sum, int goal, int[] result) {
if (i == 1) {
// one number to go, so make it fit
result[0] = goal - sum;
System.out.println(Arrays.toString(result));
} else {
// try all possible values for this step
for (int j = 0; j <= goal - sum; j++) {
// set next number of the result
result[i-1] = j;
// go to next step
generate(i-1, sum + j , goal, result);
}
}
}
Note that I only tested this for N = 3 and goal = 5. It makes absolutely no sense to try to generate all of these possibilities for N = 12 and goal = 100 (it would take forever to compute).
Let's take your comment that you can only have 5 elements in a combination, and the other 7 are 0%. Try this:
for (int i = 0; i < (1 << 12); ++i) {
if (count_number_of_1s(i) != 5) { continue; }
for (int j = 0; j < 100000000; ++j) {
int[] perc = new int[12];
int val = j;
int sum = 0;
int cnt = 0;
for (int k = 0; k < 12; ++k) {
if ((i & (1 << k)) != 0) {
cnt++;
if (cnt == 5) {
perc[k] = 100 - sum;
}
else {
perc[k] = val % 100;
val /= 100;
}
sum += perc[k];
if (sum > 100) { break; }
}
else { perc[k] = 0; }
}
if (sum == 100) {
System.out.println(perc[0] + ...);
}
}
}
The outer loop iterates over all possible combinations of the 12 items. You can do this by looping over all numbers from 1 to 2^12, where the 1s in the binary representation of each number mark the elements you're using. count_number_of_1s is a function that loops over all the bits of its parameter and returns the number of 1s. If this is not 5, just skip the iteration, because you said you only want at most 5 products mixed. (There are 792 such cases.)
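The answer does not show count_number_of_1s; one possible implementation is sketched below (Integer.bitCount(i) would do the same job):
static int count_number_of_1s(int n) {
    int count = 0;
    while (n != 0) {
        count += n & 1; // add the lowest bit
        n >>>= 1;       // unsigned shift to the next bit
    }
    return count;
}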
The j loop is looping over all the combinations of 4 (not 5) items from 0:100. There are 100^4 such cases.
The inner loop is looping over all 12 variables, and for those that have a 1 in their bit-position in i, then it means you're using that one. You compute the percentage by taking the next two decimal digits from j. For the 5th item (cnt==5), you don't take digits, you compute it by subtracting from 100.
This will take a LONG time (minutes), but it won't be nearly as bad as 12 nested loops.
for(int a=0;a<=100;a++){
for(int b=0;b<=50;b++){
for(int c=0;c<=34;c++){
for(int d=0;d<=25;d++){
for(int e=0;e<=20;e++){
for(int f=0;f<=17;f++){
for(int g=0;g<=15;g++){
for(int h=0;h<=13;h++){
for(int i=0;i<=12;i++){
for(int j=0;j<=10;j++){
for(int k=0;k<=10;k++){
for(int l=0;l<=9;l++){
if(a+b+c+d+e+f+g+h+i+j+k+l==100)
{
// run 12 for loops for arranging the
// 12 obtained numbers at all 12 places
}}}}}}}}}}}}}
In the original approach (permutation based), the iterations were 102^12 = 1.268e24. Even though the 102nd iteration was false, the loop-terminating condition was still checked a 102nd time.
So you had 102^12 condition checks in the "for" loops, in addition to 101^12 "if" condition checks, for a total of about 2.4e24 condition checks.
In my solution (combination based), the number of for-loop checks reduces to 6.243e15 for the outer 12 loops, and
the number of if-condition checks is also 6.243e15.
Now, the number of iterations of the inner 12 for loops, for every true "if" condition, is 12^12 = 8.9e12.
Let there be x true "if" conditions. So the total number of condition checks
= (number of inner for-loop iterations) * x + outer checks
= 8.9e12 * x + 6.243e15
I'm not able to find the value of x; however, I believe it wouldn't be large enough to make the total number of condition checks greater than 2.4e24.
public void zero() {
int sum = 0;
for (int i = 0; i < mArray.length; ++i) {
sum += mArray[i].mSplat;
}
}
public void one() {
int sum = 0;
Foo[] localArray = mArray;
int len = localArray.length;
for (int i = 0; i < len; ++i) {
sum += localArray[i].mSplat;
}
}
According to the Android documentation, zero is slower in the above code. But I don't understand why. I haven't studied this in much depth, but as far as I know length is a field, not a method. So when the loop retrieves its value, how is that different from retrieving it from a local variable? And an array's length is fixed once the array is initialized. What am I missing?
Well, I guess this is because in zero it always needs to retrieve the information from mArray, while in one it is directly accessible. This means zero needs two "accesses":
Access mArray
Access mArray.length
But one only needs one "access":
Access len
In the first example, the JVM needs to first fetch the reference to the array and then access its length field.
In the second example, it only accesses one local variable.
On desktop JVMs this is generally optimised and the two methods are equivalent but it seems that Android's JVM does not do it... yet...
It is a matter of scope. Accessing an instance variable is slower than accessing a method-local variable because they are not stored in the same place in memory (and method-local variables are likely to be accessed more often).
Same goes for len, but with an extra optimization. len cannot be changed from outside the method, and the compiler can see that it will never change. Therefore, its value is more predictable and the loop can be further optimized.
public void zero() {
int sum = 0;
for (int i = 0; i < mArray.length; ++i) {
sum += mArray[i].mSplat;
}
}
Here, if you look at the for loop, the array length is fetched from mArray on every iteration, which degrades the performance.
public void one() {
int sum = 0;
Foo[] localArray = mArray;
int len = localArray.length;
for (int i = 0; i < len; ++i) {
sum += localArray[i].mSplat;
}
}
In this case the length is fetched once before the for loop and then used inside the loop.
I asked this question before, but my post was cluttered with a whole bunch of other code and wasn't clearly presented, so I'm going to try again. Sorry, I'm new here.
Shell sort, as I wrote it, only works sometimes. Array a is an unsorted array of 100 integers; inc is an array of 4 integers whose values are the gaps that shell sort should use (they descend, and the final value is always 1); count is an array which stores the operation counts for different runs of shell sort; cnt is the index of the count entry which should be updated for this run of shell sort.
When I run shell sort multiple times, with different sets of 4 intervals, only sometimes does the sort fully work. Half the time the array is fully sorted, the other half of the time the array is partially sorted.
Can anyone help? Thanks in advance!
public static void shellSort(int[] a, int[] inc, int[] count, int cnt) {
for (int k = 0; k < inc.length; k++) {
for (int i = inc[k], j; i < a.length; i += inc[k]) {
int tmp = a[i];
count[cnt] += 1;
for (j = i - inc[k]; j >= 0; j -= inc[k]) {
if (a[j] <= tmp)
break;
a[j + inc[k]] = a[j];
count[cnt] += 1;
}
a[j + inc[k]] = tmp;
count[cnt] += 1;
}
}
}
One problem is that you're only sorting one inc[k]-step sequence for each k, while you should sort them all (you're only sorting {a[0], a[s], a[2*s], ... , a[m*s]}, leaving out {a[1], a[s+1], ... , a[m*s+1]} etc.). However, that should only influence performance (number of operations), not the outcome, since the last pass is a classical insertion sort (inc[inc.length-1] == 1), so that should sort the array no matter what happened before.
I don't see anything in the code that would cause failure. Maybe the inc array doesn't contain what it should? If you print out inc[k] in each iteration of the outer loop, do you get the expected output?
There is an error in your i loop control:
for (int i = inc[k], j; i < a.length; i += inc[k]) {
Should be:
for (int i = inc[k], j; i < a.length; i++) {
The inner j loop handles the comparison of elements that are inc[k] apart. The outer i loop should simply increment by 1, the same as the outer loop of a standard Insertion sort.
In fact, the final pass of Shellsort with an increment of 1 is identical to a standard Insertion sort.
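Putting that fix in place, a sketch of the corrected method looks like this (only the increment of i changes; the count bookkeeping is kept exactly as in the question):
public static void shellSort(int[] a, int[] inc, int[] count, int cnt) {
    for (int k = 0; k < inc.length; k++) {
        // the outer loop now advances by 1, as in a standard insertion sort
        for (int i = inc[k], j; i < a.length; i++) {
            int tmp = a[i];
            count[cnt] += 1;
            // shift elements that are inc[k] apart until tmp fits
            for (j = i - inc[k]; j >= 0; j -= inc[k]) {
                if (a[j] <= tmp)
                    break;
                a[j + inc[k]] = a[j];
                count[cnt] += 1;
            }
            a[j + inc[k]] = tmp;
            count[cnt] += 1;
        }
    }
}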