Suppose I have two equally long arrays of numbers. I want to create a third array such that:
c[0] = a[0] * b[0]
c[1] = a[1] * b[1]
...
If I were in Matlab, I could write a loop that performed the multiplication like this:
for i=1:length(a)
c(i) = a(i) * b(i);
end
but I know that it's good to avoid for loops, and there's a way to do that, which is:
c = a .* b;
This makes sense to me, and having timed it (tic toc) several times on two 8192-length arrays of random numbers, the .* method consistently finishes about 3x faster than the for loop.
So now I want to multiply the arrays in Java. So I write a for loop and say:
for (int i=0; i<a.length; i++) {
c[i] = a[i] * b[i];
}
My question is: is there a better way of doing this that avoids the for loop? And if there is, does it make a difference? In my mind, it runs faster without the for loop because it's multiplying the numbers in parallel instead of in series, but I have no idea what's going on under the hood (like if the compiler is unrolling the loop on its own).
There are (at least) two reasons why .* is faster than an explicit loop in Matlab. By explicit I mean a loop written in Matlab code, as opposed to internal loops that Matlab functions might be using. The reasons are:
.* is vectorized. This means that, although it very likely does the computations internally with a loop, that loop has been coded in some faster language than Matlab itself.
.* is multithreaded, and so it benefits from multiple cores running in parallel.
So in Matlab, whenever there is a built-in vectorized function, you should use it. Although the speed of Matlab's explicit loops has improved in recent years (thanks to JIT compiling for example), they are still slower than their vectorized versions.
Java follows a more conventional approach, in which explicit loops are the norm. They are not slow, and generally there are not vectorized functions that can replace them. So I'd say an explicit loop is the way to go in Java.
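That said, if you ever want something closer to Matlab's multithreaded .*, you can split the work across cores yourself, for example with a parallel IntStream. The sketch below is only an illustration (the array names follow the question, the size is the 8192 elements you mentioned), and for arrays this small the threading overhead will often cancel out any gain:
import java.util.Random;
import java.util.stream.IntStream;

public class ElementwiseMultiply {
    public static void main(String[] args) {
        int n = 8192;
        double[] a = new double[n];
        double[] b = new double[n];
        double[] c = new double[n];
        Random rnd = new Random();
        for (int i = 0; i < n; i++) {
            a[i] = rnd.nextDouble();
            b[i] = rnd.nextDouble();
        }

        // Plain explicit loop: the idiomatic Java way.
        for (int i = 0; i < n; i++) {
            c[i] = a[i] * b[i];
        }

        // Multithreaded variant, loosely analogous to Matlab's .* :
        // each index is handled by the common fork-join pool.
        IntStream.range(0, n).parallel().forEach(i -> c[i] = a[i] * b[i]);
    }
}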
Although YOU are not writing a loop in Matlab, underneath there is most likely some kind of loop, and maybe even more than one (we'd have to check the source code). There is nothing magic in Matlab. It's just a "simplified" language where more complex code is generated underneath.
Your Java loop is the correct way.
Related
I've got a little program that is a fairly pointless exercise in simple number crunching that has thrown me for a loop.
The program spawns a bunch of worker threads that do simple mathematical operations. Recently I changed the inner loop of one variant of worker from:
do
{
int3 = int1 + int2;
int3 = int1 * int2;
int1++;
int2++;
i++;
}
while (i < 128);
to something akin to:
int3 = tempint4[0] + tempint5[0];
int3 = tempint4[0] * tempint5[0];
int3 = tempint4[1] + tempint5[1];
int3 = tempint4[1] * tempint5[1];
int3 = tempint4[2] + tempint5[2];
int3 = tempint4[2] * tempint5[2];
int3 = tempint4[3] + tempint5[3];
int3 = tempint4[3] * tempint5[3];
...
int3 = tempint4[127] + tempint5[127];
int3 = tempint4[127] * tempint5[127];
The arrays are populated by random integers no higher than 1025 in value, and the array values do not change.
The end result was that the program ran much faster, though closer examination seems to indicate that the CPU isn't actually doing anything when running the newer version of the code. It seems that the JVM has figured out that it can safely ignore the code that replaced the inner loop after one iteration of the outer loop since it is only redoing the same calculations on the same set of data over and over again.
To illustrate my point, the old code took maybe ~27000 ms to run and noticeably increased the operating temperature of the CPU (it also showed 100% utilization for all cores). The new code takes maybe 5 ms to run (sometimes less) and causes nary a spike in CPU utilization or temperature. Increasing the number of outer loop iterations does nothing to change the behavior of the new code, even when the number of iterations increases by a hundred times or more.
I have another version of the worker that is identical to the one above except that it has a division operation along with the addition and multiplication operations. In its new unrolled form, the division-enabled version is also much faster than its previous form, but it actually takes a little while (~300 ms on the first run and ~200 ms on subsequent runs, despite warmup, which is a little odd) and produces a profound spike in CPU temperature for its brief run. Increasing the number of outer loop iterations seems to cause the temperature phenomenon to mostly cease after a certain amount of time has passed while running the program, though utilization still shows 100% for all cores. My guess is that the JVM is taking much longer to figure out which operations it can safely ignore when handling division operations, and that it is not ignoring all of them.
Short of adding division operations to all my code (which isn't really a fix anyway beyond a certain number of outer loop iterations), is there any way I can get the JVM to stop reducing my code to apparent NOOPs? I've tried several solutions to the problem, such as generating new random values per iteration of the outer loop, going back to simple integer variables with incrementation, and some other nonsense, but none of those solutions have produced desirable results. Either it continues to ignore the series of instructions, or the performance hit from modifications is bad enough that my division-heavy variant actually performs better than the code without division operations.
edit: to provide some context:
i: this variable is an integer that is used as a loop counter in a do/while loop. It is defined in the class file containing the worker code. Its initial value is 0. It is no longer used in the newer version of the worker.
int1/int2: These are integers defined in the class file containing the worker code. Their initial values are both 0. They were used in the old version of the code to provide changing values for each iteration of the internal loop. All I had to do was increment them upward by one per loop iteration, and the JVM would be forced to carry out every operation faithfully. Unfortunately, this loop apparently prevented the use of SIMD. Each time the outer loop iterated, int1 and int2 had their values reset to prevent overflow of int1, int2, or int3 (I have discovered that integer overflow can slow down the code unnecessarily, as can allowing a float to reach Infinity).
tempint4/tempint5: These are references to a pair of integer arrays defined in the main class file for the program (Mathtester. Yes, unimaginative, I know). When the program first starts, there is a short do/while loop that fills each array with random integers ranging from 1 to 1025. The arrays are 128 integers in size. Each array is static, though the reference variables are not. In truth there is no particular reason for me to use the reference variables. They are leftovers from when I was trying to do an array reference swap so that, after each iteration of the outer loop, tempint4 and tempint5 would refer to the opposite array. It was my hope that the JVM would stop ignoring my code block. For the division-enabled version of the code, this seems to have worked (sort of), since it fundamentally changes the values to be calculated. Swapping tempint4 for tempint5 and vice versa does not change the results of the addition and multiplication operations, so the JVM can still ignore those.
edit: Making tempint4 and tempint5 (since they are only reference variables, I am actually referring to the main arrays, Mathtester.int4 and Mathtester.int5) volatile worked, without notably reducing the amount of CPU activity or the level of CPU temperature. It did slow down the code a bit, but that is a probable indicator that the JVM was NOOPing more than I knew.
Is there any way I can get the JVM to stop reducing my code to apparent NOOPs?
Yes, by making int3 volatile.
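For example, here is a minimal sketch of the idea, reusing the names from your question (the surrounding class shape is invented for illustration): because every write to a volatile field is an observable side effect, the JIT can no longer prove that the arithmetic producing it is dead and discard it.
public class Worker implements Runnable {
    // Each write to a volatile field is an observable side effect,
    // so the JIT cannot discard the arithmetic that produces it.
    private volatile int int3;

    private final int[] tempint4;
    private final int[] tempint5;

    Worker(int[] tempint4, int[] tempint5) {
        this.tempint4 = tempint4;
        this.tempint5 = tempint5;
    }

    @Override
    public void run() {
        for (int i = 0; i < tempint4.length; i++) {
            int3 = tempint4[i] + tempint5[i];
            int3 = tempint4[i] * tempint5[i];
        }
    }
}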
One of the first things when dealing with Java performance that you have to learn by heart is this:
"A single line of Java code means nothing at all in isolation".
Modern JVMs are very complex beasts, and do all kinds of optimization. If you try to measure some small piece of code, the chances are that you will not be measuring what you think you are - it is really complicated to do it correctly without very, very detailed knowledge of what the JVM is doing.
In this case, yes, it's entirely likely that the JVM is optimizing away the loop. There's no simple way to prevent it from doing this, and almost all techniques are fragile and JVM-version specific (because new & cleverer optimizations are developed & added to the JVM all the time).
So, let me turn the question around: "What are you really trying to achieve here? Why do you want to prevent the JVM from optimizing?"
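As an aside, if the underlying goal is simply a trustworthy measurement, the usual route is to let a benchmark harness handle warmup and dead-code elimination instead of fighting the optimizer by hand. A minimal sketch using JMH, assuming the JMH dependency is available (field names borrowed from the question):
import java.util.Random;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;

@State(Scope.Thread)
public class MathBenchmark {
    int[] tempint4 = new int[128];
    int[] tempint5 = new int[128];

    @Setup
    public void fill() {
        Random rnd = new Random();
        for (int i = 0; i < 128; i++) {
            tempint4[i] = 1 + rnd.nextInt(1025);
            tempint5[i] = 1 + rnd.nextInt(1025);
        }
    }

    @Benchmark
    public int addAndMultiply() {
        int int3 = 0;
        for (int i = 0; i < 128; i++) {
            int3 += tempint4[i] + tempint5[i];
            int3 += tempint4[i] * tempint5[i];
        }
        // Returning the result lets JMH consume it, which stops
        // the JIT from treating the loop as dead code.
        return int3;
    }
}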
I am learning Python, and seeing the difference in how loop conditions are declared, I have a question: how exactly is the for loop in Python different from the same algorithm's for loop in C or Java? I know the difference in syntax, but is there a difference in the machine execution, and which is faster?
for example
for i in range(0,10):
    if i in range(3,7):
        print i
and in java
for (int i = 0; i < 10; i++) {
    if (i >= 3 && i < 7)
        System.out.println(i);
}
Here I just want to know about the difference in actual iterations over 'i' not the printing statements or the output of the code.
Also, please comment on the if condition used to check whether 'i' is between 3 and 7. In Python, if I had used the similar statement if i >= 3 and i < 7:, what difference would it have made?
I am using python2.7
If you're using python 2.x, then the range call creates a full-fledged list in memory, holding all the numbers in the range. This would be like populating a LinkedList with the numbers in Java, and iterating over it.
If you want to avoid the list, there's xrange. It returns an iterable object that does not create the temporary list, and is equivalent to the Java code you posted.
Note that the in condition is not equivalent to a manual bounds check. Python will iterate through the range in O(n) looking for the item.
In python 3.x, xrange is no more, and range returns an iterable.
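To put that analogy in Java terms, here is a rough, purely illustrative sketch of the difference between a membership test over a materialized list and a plain bounds check:
import java.util.LinkedList;
import java.util.List;

public class MembershipCheck {
    public static void main(String[] args) {
        // Rough Java analogue of Python 2's range(3, 7): a real list held in memory.
        List<Integer> window = new LinkedList<>();
        for (int k = 3; k < 7; k++) {
            window.add(k);
        }

        for (int i = 0; i < 10; i++) {
            boolean viaScan = window.contains(i); // O(n): walks the list, like "i in range(3, 7)"
            boolean viaBounds = i >= 3 && i < 7;  // O(1): plain comparison, like the Java if
            if (viaScan && viaBounds) {
                System.out.println(i);
            }
        }
    }
}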
This part of the code is from the dotproduct method of a vector class of mine. The method computes the inner product for a target array of vectors (1000 vectors).
When the vector length is an odd number (262145), the compute time is 4.37 seconds. When the vector length (N) is 262144 (a multiple of 8), the compute time is 1.93 seconds.
time1=System.nanoTime();
int count=0;
for(int j=0;j<1000;i++)
{
b=vektors[i]; // selects next vector(b) to multiply as inner product.
// each vector has an array of float elements.
if(((N/2)*2)!=N)
{
for(int i=0;i<N;i++)
{
t1+=elements[i]*b.elements[i];
}
}
else if(((N/8)*8)==N)
{
float []vek=new float[8];
for(int i=0;i<(N/8);i++)
{
vek[0]=elements[i]*b.elements[i];
vek[1]=elements[i+1]*b.elements[i+1];
vek[2]=elements[i+2]*b.elements[i+2];
vek[3]=elements[i+3]*b.elements[i+3];
vek[4]=elements[i+4]*b.elements[i+4];
vek[5]=elements[i+5]*b.elements[i+5];
vek[6]=elements[i+6]*b.elements[i+6];
vek[7]=elements[i+7]*b.elements[i+7];
t1+=vek[0]+vek[1]+vek[2]+vek[3]+vek[4]+vek[5]+vek[6]+vek[7];
//t1 is total sum of all dot products.
}
}
}
time2=System.nanoTime();
time3=(time2-time1)/1000000000.0; //seconds
Question: Could the reduction of time from 4.37 s to 1.93 s (roughly 2x as fast) be the JIT's wise decision to use SIMD instructions, or is it just the positive effect of my loop unrolling?
If the JIT cannot do SIMD optimization automatically, then in this example there is also no unrolling optimization done automatically by the JIT. Is this true?
For 1M iterations (vectors) and a vector size of 64, the speedup multiplier goes to 3.5x (cache advantage?).
Thanks.
Your code has a bunch of problems. Are you sure you're measuring what you think you're measuring?
Your first loop does this, indented more conventionally:
for(int j=0;j<1000;i++) {
b=vektors[i]; // selects next vector(b) to multiply as inner product.
// each vector has an array of float elements.
}
Your rolled loop involves a really long chain of dependent loads and stores. Your unrolled loop involves 8 separate chains of dependent loads and stores. The JVM can't turn one into the other if you're using floating-point arithmetic because they're fundamentally different computations. Breaking dependent load-store chains can lead to major speedups on modern processors.
Your rolled loop iterates over the whole vector. Your unrolled loop only iterates over the first (roughly) eighth. Thus, the unrolled loop again computes something fundamentally different.
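For comparison, here is roughly what an unrolled loop that visits every element would look like (a sketch only, reusing the names from your snippet; note that regrouping the float additions still rounds differently than the rolled loop):
// Sketch: unrolled dot product over the full vector, assuming N is a multiple of 8.
for (int j = 0; j < 1000; j++)
{
    b = vektors[j];                  // advance j here, not i
    float s0 = 0, s1 = 0, s2 = 0, s3 = 0, s4 = 0, s5 = 0, s6 = 0, s7 = 0;
    for (int i = 0; i < N; i += 8)   // step by 8 so the whole vector is covered
    {
        s0 += elements[i]     * b.elements[i];
        s1 += elements[i + 1] * b.elements[i + 1];
        s2 += elements[i + 2] * b.elements[i + 2];
        s3 += elements[i + 3] * b.elements[i + 3];
        s4 += elements[i + 4] * b.elements[i + 4];
        s5 += elements[i + 5] * b.elements[i + 5];
        s6 += elements[i + 6] * b.elements[i + 6];
        s7 += elements[i + 7] * b.elements[i + 7];
    }
    // Eight independent accumulators keep the dependency chains short;
    // they are combined once per vector.
    t1 += s0 + s1 + s2 + s3 + s4 + s5 + s6 + s7;
}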
I haven't seen a JVM generate vectorised code for something like your second loop, but I'm maybe a few years out of date on what JVMs do. Try using -XX:+PrintAssembly when you run your code and inspect the code opto generates.
I have done a little research on this (and am drawing from knowledge from a similar project I did in C with matrix multiplication), but take my answer with a grain of salt as I am by no means an expert on this topic.
As for your first question, I think the speedup is coming from your loop unrolling; you're making roughly 87% fewer condition checks in terms of the for loop. As far as I know, the JVM has supported SSE since 1.4, but to actually control whether your code is using vectorization (and to know for sure), you'll need to use JNI.
See an example of JNI here: Do any JVM's JIT compilers generate code that uses vectorized floating point instructions?
When you decrease the size of your vector to 64 from 262144, cache is definitely a factor. When I did this project in C, we had to implement cache blocking for larger matrices in order to take advantage of the cache. One thing you might want to do is check your cache size.
Just as a side note: It might be a better idea to measure performance in flops rather than seconds, just because the runtime (in seconds) of your program can vary based on many different factors, such as CPU usage at the time.
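For example, counting one multiply and one add per element, a rough throughput figure could be derived from your own timing variables like this (a sketch, not tested against your class):
// Sketch: convert the measured time into a throughput figure.
// Each element contributes one multiply and one add (2 flops),
// and the outer loop runs over 1000 vectors.
long flops = 2L * (long) N * 1000L;
double gflops = flops / (time3 * 1e9); // time3 is the elapsed time in seconds
System.out.println("Throughput: " + gflops + " GFLOP/s");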
I would like to ask more experienced developers about one simple, but for me not obvious, thing. Assume you have got such a code (Java):
for(int i=0; i<vector.size(); i++){
//make some stuff here
}
I come across such statements very often, so maybe there is nothing wrong with them. But to me it seems unnecessary to invoke the size method in each iteration. I would use this approach instead:
int vectorSize = vector.size();
for(int i=0; i<vectorSize; i++){
//make some stuff here
}
The same thing here:
for(int i=0; i<myTreeNode.getChildren().size(); i++){
//make some stuff here
}
I am definitely not an expert in programming yet, so my question is: am I just looking for a problem where there is none, or is it important to take care of such details in professional code?
A method invocation does require the JVM to do some additional work, so what you're doing seems, at first view, like an optimization.
However, some JVM implementations are smart enough to inline method calls, and for those, the difference will be nonexistent.
The Android programming guidelines for example always recommend doing what you've pointed out, but again, the JVM implementation manual (if you can get your hands on one) will tell you if it optimizes code for you or not.
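For what it's worth, once the JVM inlines size(), the two styles usually end up performing the same, and when you don't need the index at all, the enhanced for loop sidesteps the question entirely. A small sketch (the element type is an assumption):
import java.util.Vector;

public class LoopStyles {
    // Hoisted size: size() is read once, before the loop.
    static int sumIndexed(Vector<Integer> vector) {
        int sum = 0;
        int vectorSize = vector.size();
        for (int i = 0; i < vectorSize; i++) {
            sum += vector.get(i);
        }
        return sum;
    }

    // Enhanced for loop: no explicit index or size() call in your own code.
    static int sumEnhanced(Vector<Integer> vector) {
        int sum = 0;
        for (int value : vector) {
            sum += value;
        }
        return sum;
    }
}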
Usually size() is a small constant-time operation and so the cost of calling size is trivial compared to the cost of executing the loop body, and the just in time compiler may be taking care of this optimization for you; therefore, there may not be much benefit to this optimization.
That said, this optimization does not hurt code readability, so there is no reason to avoid it. What should generally be avoided are optimizations that only buy a small constant factor (as opposed to, e.g., an optimization that turns an O(n) operation into an O(1) one) at the cost of readability. For example, you could unroll the loop:
int i;
int vectorSizeDivisibleBy4 = vectorSize - vectorSize % 4; // largest multiple of four that does not exceed vectorSize
for(i = 0; i < vectorSizeDivisibleBy4; i += 4) {
// loop body executed on [i]
// second copy of loop body executed on [i+1]
// third copy of loop body executed on [i+2]
// fourth copy of loop body executed on [i+3]
}
for(; i < vectorSize; i++) { // in case vectorSize wasn't a multiple of four
// loop body
}
By unrolling the loop four times you reduce the number of times that i < vectorSize is evaluated by a factor of four, at the cost of making your code an unreadable mess (it might also muck up the instruction cache, resulting in a negative performance impact). Don't do this. But, like I said, int vectorSize = vector.size() doesn't fall into this category, so have at it.
At first sight the alternative you are suggesting seems like an optimization, but in terms of speed it is identical to the common approach, because:
the call to size() on a Java Vector is O(1): every Vector stores a variable containing its size, so you never compute the size on each iteration, you just read it.
Note: you can see in http://www.docjar.com/html/api/java/util/Vector.java.html that the size() function just returns a protected variable, elementCount.
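In other words, the linked method boils down to a trivial accessor, roughly:
// Paraphrased from java.util.Vector: size() just returns a field.
public synchronized int size() {
    return elementCount;
}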
The following code in python takes very long to run. (I couldn't wait until the program ended, though my friend told me for him it took 20 minutes.)
But the equivalent code in Java runs in approximately 8 seconds and in C it takes 45 seconds.
I expected Python to be slow, but not this much, and C, which I expected to be faster than Java, was actually slower. Is the JVM using some loop unrolling technique to achieve this speed? Is there any reason for Python being so slow?
import time
st=time.time()
for i in xrange(0,100000):
    for j in xrange(0,100000):
        continue;
print "Time taken : ",time.time()-st
Your test is not measuring anything meaningful.
A language's performance in the real world has little to do with how quickly it executes a tight loop.
Frankly, I'm intrigued that C and Java took as long as they did; I would have expected both of their compilers to realize that there was nothing happening inside the inner loop, and have optimized both of them away into nonexistence (and 0 seconds).
Python, on the other hand, is still interpreted (I could be wrong about this). In any case, it looks like the outer loop is needing to construct 100,000 xrange objects on which to run the empty inner loop, and that's unlikely to be optimized away.
So all you're really measuring is various compilers' ability to see through the fact that no real computing work is being done.
The lesson is: Performance is never what you expect. Therefore, always measure, never believe.
Some reasons why you might see these numbers (and from the first sentence, some of these might be completely wrong):
C is compiled for an "i586" processor (also called Pentium). That CPU was sold from 1993 to about 2000. Have you seen one lately? Guess not. So the C code isn't really optimized for what your CPU can do (or, to put it the other way around: today's CPUs try very hard to be a fast Pentium CPU). Java, OTOH, is compiled for your CPU as the code is loaded. It can pull some tricks that C simply can't. The price is that the C program starts in 15 ms while the Java program needs 4 seconds.
Python has no JIT (just in time compiler), yet. All code is converted into bytecode which is then interpreted. This means the loop above is turned into a dozen bytecode instructions which are then interpreted by a C program. That just takes time. Python is not meant for huge loops, it's meant for smart algorithms which you simply can't express in any other language (at least not with the same amount of code and readability).
So, just as it doesn't make sense to go shopping with an 18t truck (you can transport anything, but you won't find a space to park it), choose your programming language according to the problem you need to solve. Small & fast? C. Just fast? Java. Flexible? Python. Flexible & fast? Python with a helper library in C (like NumPy).
Is there any reason for Python being so slow?
Yes.
But what does it matter? You've created 100,000 xrange objects. Why? What does that matter? What is your real question on performance? What algorithm do you actually have that's actually too slow?
for i in xrange(0,100000): # Creates one xrange object
    for j in xrange(0,100000): # Creates a fresh xrange object each time through the loop
for i in xrange(0, 10000):
    for j in xrange(0, 10000):
        pass
or
for i in xrange(0, 100000000):
    pass
Python 2.6.5 - Time taken : 8.50866484642
PyPy 1.3 - Time taken : 1.55999398232
The reason for the slow run is not the creation of xrange objects.
gcc 4.2 with the -O1 flag or higher optimizes away the loop, and the program takes 1 millisecond to execute.
This benchmark is not very representative as it is very far from any real world use.
You're doing a nested loop for a reason, and you never leave it empty.
Python doesn't optimize away the loop, although I see no technical reason why it couldn't.
Python is slower than C because it's further from the machine language. xrange is a nice abstraction but it adds a heavy layer of machine code compared to a simple C loop.
C source:
int main( void ){
int i, j;
for (i=0;i<100000;i++){
for (j=0;j<100000;j++){
continue;
}
}
return 0;
}
A good compiler would optimise away the loop.
Assuming the loop isn't optimised away, I'd expect Python to be something like 100 times slower than the C version