efficiency of looping through a 2d array horizontally or vertically - Java

I read an article in stackoverflow posted 4 years ago, see here:
Fastest way to loop through a 2d array?
Almost every answer agreed that scanning horizontally would be faster. I wrote a short Java program to check this, and it turned out not to be the case. I chose a 400x400 matrix: the time for the horizontal scan was 6 ms and the time for the vertical scan was 3 ms. I checked other matrix sizes, and the vertical scan also came out faster. Am I missing something, or is this indeed the case?
public class Test {
    public static void main(String[] args) {
        int row = Integer.parseInt(args[0]);
        int column = Integer.parseInt(args[1]);
        int[][] bigarray = new int[row][column];

        long startTime = System.currentTimeMillis();
        for (int i = 0; i < row; i++)
            for (int j = 0; j < column; j++)
                bigarray[i][j] = Math.abs(i - j) - Math.abs(i - j);
        long endTime = System.currentTimeMillis();
        long totalTime = endTime - startTime;
        System.out.println("scan horizontally time is: ");
        System.out.println(totalTime);

        int[][] bigarray1 = new int[row][column];
        long startTime1 = System.currentTimeMillis();
        for (int j = 0; j < column; j++)
            for (int i = 0; i < row; i++)
                bigarray1[i][j] = Math.abs(i - j) - Math.abs(i - j);
        long endTime1 = System.currentTimeMillis();
        long totalTime1 = endTime1 - startTime1;
        System.out.println("scan vertically time is: ");
        System.out.println(totalTime1);
    }
}

For the horizontal version, you could hoist the row lookup out of the inner loop (note the braces, which are required once the loop body contains the local declaration):
for (int i = 0; i < row; i++) {
    int[] rowArray = bigarray[i];
    for (int j = 0; j < column; j++)
        rowArray[j] = Math.abs(i - j) - Math.abs(i - j);
}
I wouldn't be surprised if the first test is always slower with your test setup: Java needs a lot of warmup time before the JIT has compiled the hot code. A better test setup would be two separate programs, or at least a few warmup loops before taking the time.
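One way to reduce that warmup bias is sketched below: time each scan order with System.nanoTime(), but only after a warmup phase that runs both loops many times. This is an illustrative sketch, not the original poster's program; the fill expression is changed from the always-zero Math.abs(i-j)-Math.abs(i-j) to i+j so the JIT cannot discard the stores, and the 400x400 size and 1000 warmup iterations are arbitrary choices.

```java
public class ScanBenchmark {
    static final int ROW = 400, COL = 400;

    // Row-major scan: the inner loop walks along one row.
    static long timeHorizontal(int[][] a) {
        long start = System.nanoTime();
        for (int i = 0; i < ROW; i++)
            for (int j = 0; j < COL; j++)
                a[i][j] = i + j;
        return System.nanoTime() - start;
    }

    // Column-major scan: the inner loop jumps between rows.
    static long timeVertical(int[][] a) {
        long start = System.nanoTime();
        for (int j = 0; j < COL; j++)
            for (int i = 0; i < ROW; i++)
                a[i][j] = i + j;
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        int[][] a = new int[ROW][COL];
        // Warmup: let the JIT compile both loops before the measured runs.
        for (int k = 0; k < 1000; k++) {
            timeHorizontal(a);
            timeVertical(a);
        }
        System.out.println("horizontal: " + timeHorizontal(a) + " ns");
        System.out.println("vertical:   " + timeVertical(a) + " ns");
    }
}
```

With both loops warmed up the same way, the measured order no longer decides which one looks faster.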

Related

How to graph by hand the growth rate of a program?

Below is a simple for loop. I understand using growth-rate analysis to look at a program and determine its rate. My question is: after a program runs and you actually see how fast it works, how would you graph its actual growth rate?
long startTime = System.nanoTime();
int sum = 0;
int N = 1000000;
for (int i = 0; i < N; i++) {
    sum += Math.sqrt(i);
}
long endTime = System.nanoTime();
long duration = endTime - startTime;
System.out.println("Here is the time it takes " + duration);
Big O follows a certain trend depending on the complexity of the program.
Types of Big O Explained
Big O explained
I have listed some resources that I think should be useful.
Also if you don't need to draw it by hand, try looking up some Java resources that generate graphs.
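To get data points you could actually graph, one approach is to time the same loop at several input sizes and print (N, nanoseconds) pairs, then plot them by hand or feed them to a charting tool. A minimal sketch of that idea, with arbitrary sizes and a single warmup pass:

```java
public class GrowthData {
    // Times the summation loop from the question for a given N.
    static long timeSum(int n) {
        long start = System.nanoTime();
        double sum = 0;
        for (int i = 0; i < n; i++) {
            sum += Math.sqrt(i);
        }
        long elapsed = System.nanoTime() - start;
        // Use 'sum' so the JIT can't eliminate the loop entirely
        // (the condition is never true, since sqrt values are non-negative).
        if (sum < 0) System.out.println(sum);
        return elapsed;
    }

    public static void main(String[] args) {
        // Warm up once so the first data point isn't dominated by JIT compilation.
        timeSum(1_000_000);
        // Double N each step; roughly doubling times suggest linear growth.
        for (int n = 125_000; n <= 8_000_000; n *= 2) {
            System.out.println(n + "\t" + timeSum(n));
        }
    }
}
```

Plotting N on the x-axis against time on the y-axis (or on log-log axes, where a straight line's slope is the polynomial degree) gives you the empirical growth curve.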

Working of size() method of list in java

I have 2 pieces of code and first part is here
int count = myArrayList.size();
for (int a =0; a< count; a++) {
//any calculation
}
Second part of the code is
for (int a =0; a< myArrayList.size(); a++) {
//any calculation
}
In both pieces I am iterating over myArrayList (an ArrayList). In the first part I compute the size once and then iterate, so size() is called only once. In the second part, size() is called on every iteration to evaluate the loop condition before the comparison. Isn't that a longer process? Yet I have seen many examples in many places that call size() on every iteration.
My questions:
Isn't it a long process? (talking about the second part)
Which is best practice, the first or the second?
Which is the more efficient way to iterate?
How does myArrayList.size() work, or calculate the size?
EDIT:
To test this I wrote programs and measured the time. The first version is:
ArrayList<Integer> myArrayList = new ArrayList<>();
for (int a = 0; a < 1000; a++) {
    myArrayList.add(a);
}
long startTime = System.nanoTime();
for (int a = 0; a < myArrayList.size(); a++) {
    //any calculation
}
long lastTime = System.nanoTime();
long result = lastTime - startTime;
and the result is = 34490 nano seconds
on the other hand
ArrayList<Integer> myArrayList = new ArrayList<>();
for (int a = 0; a < 1000; a++) {
    myArrayList.add(a);
}
long startTime = System.nanoTime();
int count = myArrayList.size();
for (int a = 0; a < count; a++) {
    //any calculation
}
long endTime = System.nanoTime();
long result = endTime - startTime;
and the result is = 11394 nano seconds
Here, calling size() on every iteration took much longer than calling it only once. Is this the right way to measure the time?
No. The call is not a "long-running" process; the JVM can make method calls quickly.
Either is acceptable. Prefer the one that's easier to read. Adding a local reference with a meaningful name can make something easier to read.
You might prefer the for-each loop for readability1. There is no appreciable efficiency difference between your two options (or the for-each).
The ArrayList implementation keeps an internal count (in the OpenJDK implementation, and probably others, that is size) and manages the internal array that backs the List.
1See also The Developer Insight Series, Part 1: Write Dumb Code
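For comparison, here is a minimal sketch of all three loop forms over the same list (the class name SizeDemo is made up for illustration). They visit the same elements, and since size() is a constant-time field read in OpenJDK's ArrayList, none differs meaningfully in cost:

```java
import java.util.ArrayList;
import java.util.List;

public class SizeDemo {
    public static void main(String[] args) {
        List<Integer> myArrayList = new ArrayList<>();
        for (int a = 0; a < 1000; a++) myArrayList.add(a);

        long sum1 = 0, sum2 = 0, sum3 = 0;

        int count = myArrayList.size();              // size hoisted out of the loop
        for (int a = 0; a < count; a++) sum1 += myArrayList.get(a);

        for (int a = 0; a < myArrayList.size(); a++) // size() called each pass
            sum2 += myArrayList.get(a);

        for (int value : myArrayList)                // for-each, often most readable
            sum3 += value;

        System.out.println(sum1 == sum2 && sum2 == sum3); // prints true
    }
}
```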

how do I create a program that uses a for loop to print the first 1000 perfect squares?

This is what I have so far. I have to write a for loop that prints the first 1000 perfect squares and also measures the execution time. The timer is working, but I don't really know how to show the 1000 perfect squares.
import java.util.Calendar;

public class WhileLoop {
    public static void main(String[] args) {
        long time_start, time_finish;
        time_start = time();
        int i;
        for (i = 0; i < 1000; i++) {
            System.out.print("");
            System.out.println(i);
        }
        time_finish = time();
        System.out.println(time_finish - time_start + " milliseconds");
    }

    public static long time() {
        Calendar cal = Calendar.getInstance();
        return cal.getTimeInMillis();
    }
}
You've got to break the problem down step by step. Long story short, you should do the following:
Get Start Time
Print 1*1
Print 2*2
Print 3*3 ... to first 1000 squares
Print Total Execution Time
Then, using the tools that Java provides, you can:
Retrieve Current Time (Store this into a startTime variable)
Loop through 1000 elements and print results
Retrieve Current Time (Compare this with the startTime to get total RunTime)
I don't want to give away too much. So, hopefully this will provide a good push in the right direction.
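For reference, a minimal sketch of the outline above: print i*i for i = 1..1000, with a System.currentTimeMillis() pair wrapped around the loop. Treat it as one possible shape of the answer, not the only one.

```java
public class Squares {
    public static void main(String[] args) {
        long start = System.currentTimeMillis();   // step 1: get start time
        for (int i = 1; i <= 1000; i++) {
            System.out.println(i * i);             // steps 2-4: print 1, 4, 9, ... 1000000
        }
        long finish = System.currentTimeMillis();  // step 5: total execution time
        System.out.println((finish - start) + " milliseconds");
    }
}
```

Note that the largest value, 1000 * 1000 = 1,000,000, still fits comfortably in an int.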

Java: Calculate how long sorting an array takes

I have some code that generates 1000 numbers in an array and then sorts them:
import java.util.Arrays;
import java.util.Random;

public class OppgA {
    public static void main(String[] args) {
        int[] anArray = new int[1000];
        Random generator = new Random();
        for (int i = 0; i < 1000; i++) {
            anArray[i] = generator.nextInt(1000) + 1;
        }
        Arrays.sort(anArray);
        System.out.println(Arrays.toString(anArray));
    }
}
Now I'm asked to calculate and print the time it took to sort the array. Any clues on how I can do this? I really couldn't find much by searching that would help in my case.
Thanks!
You can call (and store the result of) System.nanoTime() before and after the call to Arrays.sort(); the difference is the time spent, in nanoseconds. That method is preferred over System.currentTimeMillis() for calculating durations.
import java.util.concurrent.TimeUnit;

long start = System.nanoTime();
Arrays.sort(anArray);
long end = System.nanoTime();
long timeInMillis = TimeUnit.MILLISECONDS.convert(end - start, TimeUnit.NANOSECONDS);
System.out.println("Time spent in ms: " + timeInMillis);
But note that the result of your measurement will probably vary widely if you run the program several times. To get a more precise calculation would be more involved - see for example: How do I write a correct micro-benchmark in Java?.
Before sorting, declare a long which corresponds to the time before you start the sorting:
long timeStarted = System.currentTimeMillis();
//your sorting here.
//after sorting
System.out.println("Sorting lasted: " + (System.currentTimeMillis() - timeStarted));
The result is the duration of your sort in milliseconds.
As assylias commented you can also use System.nanoTime() if you prefer precise measurements of elapsed time.
Proper microbenchmarking is done using a ready-made tool for that purpose, like Google Caliper or Oracle jmh. However, if you want a poor-man's edition, follow at least these points:
measure with System.nanoTime() (as explained elsewhere). Do not trust small numbers: if you get timings such as 10 microseconds, you are measuring a too short timespan. Enlarge the array to get at least into the milliseconds;
repeat the sorting process many times (10, 100 perhaps) and display the timing of each attempt. You are expected to see a marked drop in the time after the first few runs, but after that the timings should stabilize. If you still observe wild variation, you know something's amiss;
to avoid garbage collection issues, reuse the same array, but re-fill it with new random data each time.
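Putting those three points together, a poor-man's benchmark might look like the sketch below. The 1,000,000-element array and 10 runs are arbitrary choices; the same array is reused and re-filled with fresh random data before each timed sort.

```java
import java.util.Arrays;
import java.util.Random;

public class SortBench {
    public static void main(String[] args) {
        // Large enough that each sort takes milliseconds, not microseconds.
        int[] anArray = new int[1_000_000];
        Random generator = new Random();

        for (int run = 0; run < 10; run++) {
            // Re-fill the same array: fresh data each run, no extra garbage.
            for (int i = 0; i < anArray.length; i++) {
                anArray[i] = generator.nextInt(1000) + 1;
            }
            long start = System.nanoTime();
            Arrays.sort(anArray);
            long millis = (System.nanoTime() - start) / 1_000_000;
            System.out.println("run " + run + ": " + millis + " ms");
        }
    }
}
```

Expect the first run or two to be slower while the JIT warms up, then stable timings; wild variation after that suggests something else (GC, other load) is interfering.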
long beforeTime = System.currentTimeMillis();
// Your Code
long afterTime = System.currentTimeMillis();
long diffInMilliSeconds = afterTime - beforeTime;
Before starting the calculation, or right after generating the array, you can use System#currentTimeMillis() to get the current time; do the same right after the sort completes, and then find the difference.
Do it this way:
long start = System.currentTimeMillis();
// ... your sorting code ...
long end = System.currentTimeMillis();
long timeInMillis = end - start;
Hope that helps.
import java.util.Arrays;
import java.util.Date;
import java.util.Random;

public class OppgA {
    public static void main(String[] args) {
        int[] anArray = new int[1000];
        Random generator = new Random();
        for (int i = 0; i < 1000; i++) {
            anArray[i] = generator.nextInt(1000) + 1;
        }
        Date before = new Date();
        Arrays.sort(anArray);
        Date after = new Date();
        System.out.println(after.getTime() - before.getTime());
        System.out.println(Arrays.toString(anArray));
    }
}
This is not an ideal way, but it will work:
long startingTime = System.currentTimeMillis();
Arrays.sort(anArray);
long endTime = System.currentTimeMillis();
System.out.println("Sorting time: " + (endTime - startingTime) + "ms");
The following, with nanosecond precision, is better:
long startingTime = System.nanoTime();
Arrays.sort(anArray);
long endTime = System.nanoTime();
System.out.println("Sorting time: " + (endTime - startingTime) + "ns");
In short, you can either extract your code into a method and calculate the difference between the timestamps at the start and end of that method, or you can just run it in a profiler or an IDE, which will print the execution time.
Ideally, you should not mix your business logic (array sorting in this case) with measurement code. If you do need to measure execution time within the app, you can try AOP for that.
Please refer to this post, which describes possible solutions in detail.

adding run time on my java program

What is the correct code for calculating the run time in Java of a method with the
public static int getGcd(int a, int b, int temp) format?
A simple solution:
First, grab and store the time before the piece of code you want the run time for:
long start = System.currentTimeMillis();
After the code you are tracking, grab the current time and subtract your starting point from it to get the total time elapsed:
System.out.println(System.currentTimeMillis() - start);
If it runs relatively fast and you're trying to get an average time by running it on a bunch of random inputs, use:
long totalTime = 0;
long start = System.nanoTime();
for (int i = 0; i < n; i++) {
    //Generate a and b
    getGcd(a, b);
}
long end = System.nanoTime();
totalTime = end - start;

// Time the generation loop alone, and subtract that overhead:
start = System.nanoTime();
for (int i = 0; i < n; i++) {
    //Generate a and b
}
end = System.nanoTime();
totalTime -= end - start;
return totalTime / n;
This gives you your average time in nanoseconds.
Finding the average running time of GCD is a very interesting and complex problem. In the worst case, the inputs have a ratio which is close to the golden mean (such as consecutive Fibonacci numbers) and then the running time is O(log n). But it's still possible to have extremely large inputs and end up with essentially constant time. I'd be curious to know your results.
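For reference, a complete, runnable version of the averaging idea might look like the sketch below. Since the body of getGcd(int a, int b, int temp) isn't shown in the question, a standard two-argument recursive Euclid is substituted, and the input count and range are arbitrary choices.

```java
import java.util.Random;

public class GcdTiming {
    // Stand-in for the question's getGcd: classic recursive Euclid.
    static int getGcd(int a, int b) {
        return b == 0 ? a : getGcd(b, a % b);
    }

    public static void main(String[] args) {
        Random rnd = new Random();
        int n = 1_000_000;
        // Pre-generate inputs so generation cost isn't inside the timed loop.
        int[] as = new int[n], bs = new int[n];
        for (int i = 0; i < n; i++) {
            as[i] = rnd.nextInt(1_000_000) + 1;
            bs[i] = rnd.nextInt(1_000_000) + 1;
        }

        long start = System.nanoTime();
        long sink = 0;
        for (int i = 0; i < n; i++) {
            sink += getGcd(as[i], bs[i]); // sink keeps the JIT from eliding the calls
        }
        long totalTime = System.nanoTime() - start;

        System.out.println("average: " + (totalTime / n) + " ns per call (checksum " + sink + ")");
    }
}
```

Pre-generating the inputs avoids the subtract-the-empty-loop step in the snippet above, at the cost of some memory.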
