printf always shows 0 processor time as output - java

I wanted to compare two different ways of testing whether a number is odd or even, so I tried timing them using the clock() function and clock_t variables.
Nothing seemed to work. I searched a lot on the web and modified my code based on answers I found on Stack Overflow, but still nothing.
This is my code:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <stdint.h>

clock_t startm, stopm;
#define START if ((startm = clock()) == -1) { printf("Error calling clock"); exit(1); }
#define STOP  if ((stopm = clock()) == -1) { printf("Error calling clock"); exit(1); }
#define PRINTTIME printf("%ju ticks used by the processor.", (uintmax_t)(stopm - startm));
#define COUNT 18446744073709551600
#define STEP COUNT/100

int timetest(void) {
    unsigned long long int i = 0, y = 0, x = 76546546545541; // x = a random big odd number
    clock_t startTime, stopTime;

    printf("\nstarting bitwise method :\n");
    START;
    for (i = 0; i < COUNT; i++) {
        if (x & 1) y = 1;
    }
    STOP;
    printf("\n");
    PRINTTIME;

    y = 0;
    printf("\nstarting mul-div method :\n");
    START;
    for (i = 0; i < COUNT; i++) {
        if (((x / 2) * 2) != x) y = 1;
    }
    STOP;
    printf("\n");
    PRINTTIME;
    printf("\n\n");
    return 0;
}
I'm always getting "0 ticks used by the processor." as the output.
Any help would be highly appreciated.
Edit:
I've had enough of compiler issues, so I created a Java version of the above program. It gives me answers, though it's for the Java platform.
public class test {

    private final static int count = 500000000;
    private final static long num = 55465465465465L;
    private final static int loops = 25;

    private long runTime;
    private long result;
    private long bitArr[] = new long[loops];
    private long mulDivArr[] = new long[loops];
    private double meanVal;

    private void bitwiser() {
        for (int i = 0; i < count; i++) {
            result = num & 1;
        }
    }

    private void muldiv() {
        for (int i = 0; i < count; i++) {
            result = (num / 2) * 2;
        }
    }

    public test() {
        // run loops and gather info
        for (int i = 0; i < loops; i++) {
            runTime = System.currentTimeMillis();
            bitwiser();
            runTime = System.currentTimeMillis() - runTime;
            bitArr[i] = runTime;

            runTime = System.currentTimeMillis();
            muldiv();
            runTime = System.currentTimeMillis() - runTime;
            mulDivArr[i] = runTime;
        }
        // calculate stats
        meanVal = stats.mean(bitArr);
        System.out.println("bitwise time : " + meanVal);
        meanVal = stats.mean(mulDivArr);
        System.out.println("muldiv time : " + meanVal);
    }

    public static void main(String[] args) {
        new test();
    }
}

final class stats {

    private stats() {
        // empty
    }

    public static double mean(long[] a) {
        if (a.length == 0)
            return Double.NaN;
        long sum = sum(a);
        return (double) sum / a.length;
    }

    public static long sum(long[] a) {
        long sum = 0L;
        for (int i = 0; i < a.length; i++) {
            sum += a[i];
        }
        return sum;
    }
}
Output (in milliseconds):
bitwise time : 1109.52
muldiv time : 1108.16
On average, bitwise seems to be a tad slower than muldiv.
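(A caveat worth flagging here, echoed by the answers to the inlining question further down this page: result is assigned in both loops but never read afterwards, so the JIT is free to discard the loop bodies, which would make the two timings meaningless. A minimal sketch of a guard against that; sink is a name introduced for illustration, not from the original post:)

private long sink; // a field the JIT cannot prove unused

private void bitwiser() {
    for (int i = 0; i < count; i++) {
        sink += num & 1;
    }
}

private void muldiv() {
    for (int i = 0; i < count; i++) {
        sink += (num / 2) * 2;
    }
}

// ...after the timing loops, print sink once so the work has an observable effect:
System.out.println("sink = " + sink);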

This:
#define COUNT 18446744073709551600
will overflow: an unsuffixed decimal literal can only take a signed integer type, and this value does not fit even in long long. You must append ULL to give the literal type unsigned long long:
#define COUNT 18446744073709551600ULL

Related

How to fix method cannot be applied to given types

I am creating a program to do a binary search.
The method should be repeated a different number of times, and I want to print out the time it took to repeat the method.
The way the code is now, I get the compiler error method binarySearch cannot be applied to given types, and I don't know how to fix it.
public class Bin_search {

    public static void main(String[] args) {
        int z = Integer.parseInt(args[0]);
        double k = (int) (Math.random() * 1000001);
        int n = 1000000;
        int arr[] = new int[n];
        int i = 0;
        for (i = 0; i < n; i++) {
            arr[i] = i;
        }
        long startTime = System.currentTimeMillis();
        for (int t = 0; t < z; t++) {
            binarySearch();
        }
        long stopTime = System.currentTimeMillis();
        long elapsedTime = stopTime - startTime;
        System.out.println("It took " + elapsedTime + " ms to repeat the algorithm.");
    }

    int binarySearch(int n, double k, int arr[]) {
        int li = 0;
        int re = n + 1;
        int m;
        while (li < re - 1) {
            m = (li + re) / 2;
            if (k <= arr[m]) {
                re = m;
            } else {
                li = m;
            }
        }
        return re;
    }
}
First of all, the binarySearch method takes three arguments and you are not providing them; secondly, you should make your binarySearch method static, otherwise it cannot be called from the main method without creating an instance first.
I think it should be like this:
public class Main {

    public static void main(String[] args) {
        args = new String[3];
        args[0] = "100";
        int z = Integer.parseInt(args[0]);
        double k = (int) (Math.random() * 1000001);
        int n = 1000000;
        int arr[] = new int[n];
        int i = 0;
        for (i = 0; i < n; i++) {
            arr[i] = i;
        }
        long startTime = System.currentTimeMillis();
        for (int t = 0; t < z; t++) {
            binarySearch(n, k, arr);
        }
        long stopTime = System.currentTimeMillis();
        long elapsedTime = stopTime - startTime;
        System.out.println("It took " + elapsedTime + " ms to repeat the algorithm.");
    }

    static int binarySearch(int n, double k, int arr[]) {
        int li = 0;
        int re = n + 1;
        int m;
        while (li < re - 1) {
            m = (li + re) / 2;
            if (k <= arr[m]) {
                re = m;
            } else {
                li = m;
            }
        }
        return re;
    }
}
EDIT: Now it works, but please check your application logic to be sure it's doing what you expect.
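(A side note on the snippet above, not from the original answer: reassigning args throws away whatever was actually passed on the command line. A variant that keeps the command-line value and only falls back to a default:)

// Hypothetical variant: use the command-line argument when present,
// otherwise default to 100 repetitions.
int z = (args.length > 0) ? Integer.parseInt(args[0]) : 100;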

Why is running time of the same algorithm different in doubling ratio test?

I am analyzing the brute-force Three Sum algorithm. Let's say the running time of this algorithm is T(N) = aN^3. What I am doing is running this ThreeSum.java program with 8Kints.txt and using that running time to calculate the constant a. After calculating a, I am predicting what the running time for 16Kints.txt will be. Here is my ThreeSum.java file:
public class ThreeSum {

    public static int count(int[] a) {
        // Count triples that sum to 0.
        int N = a.length;
        int cnt = 0;
        for (int i = 0; i < N; i++)
            for (int j = i + 1; j < N; j++)
                for (int k = j + 1; k < N; k++)
                    if (a[i] + a[j] + a[k] == 0)
                        cnt++;
        return cnt;
    }

    public static void main(String[] args) {
        In in = new In(args[0]);
        int[] a = in.readAllInts();
        Stopwatch timer = new Stopwatch();
        int count = count(a);
        StdOut.println("elapsed time = " + timer.elapsedTime());
        StdOut.println(count);
    }
}
When I run like this:
$ java ThreeSum 8Kints.txt
I get this:
elapsed time = 165.335
Now, in the doubling-ratio experiment, I use the same method inside another client and run that client with multiple files as arguments. I want to compare the running time on 8Kints.txt with the run above, but I get a different, actually faster, result. Here is my DoublingRatio.java client:
public class DoublingRatio {

    public static double timeTrial(int[] a) {
        Stopwatch timer = new Stopwatch();
        int cnt = ThreeSum.count(a);
        return timer.elapsedTime();
    }

    public static void main(String[] args) {
        In in;
        int[][] inputs = new int[args.length][];
        for (int i = 0; i < args.length; i++) {
            in = new In(args[i]);
            inputs[i] = in.readAllInts();
        }
        double prev = timeTrial(inputs[0]);
        for (int i = 1; i < args.length; i++) {
            double time = timeTrial(inputs[i]);
            StdOut.printf("%6d %7.3f ", inputs[i].length, time);
            StdOut.printf("%5.1f\n", time / prev);
            prev = time;
        }
    }
}
When I run this like:
$ java DoublingRatio 1Kints.txt 2Kints.txt 4Kints.txt 8Kints.txt 16Kints.txt 32Kints.txt
I get a faster result and I wonder why:
    N      sec   ratio
 2000    2.631     7.8
 4000    4.467     1.7
 8000   34.626     7.8
I assume it is something that has to do with Java, not the algorithm. Does Java optimize some things under the hood?
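(A note on what to expect, not from the original thread: for a cubic algorithm the doubling ratio should approach 2^3 = 8, so the 7.8 values are in line with T(N) = aN^3. One common suspect for the fast 8K run is JIT warm-up: by the time DoublingRatio reaches the larger inputs, count() has already been compiled. A minimal sketch of testing that hunch with an untimed warm-up pass, reusing the In/Stopwatch/StdOut helpers assumed above:)

public static void main(String[] args) {
    In in;
    int[][] inputs = new int[args.length][];
    for (int i = 0; i < args.length; i++) {
        in = new In(args[i]);
        inputs[i] = in.readAllInts();
    }
    // Untimed warm-up pass: give the JIT a chance to compile
    // ThreeSum.count() before any measured run.
    ThreeSum.count(inputs[0]);
    double prev = timeTrial(inputs[0]);
    for (int i = 1; i < args.length; i++) {
        double time = timeTrial(inputs[i]);
        StdOut.printf("%6d %7.3f ", inputs[i].length, time);
        StdOut.printf("%5.1f\n", time / prev);
        prev = time;
    }
}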

Sudden infinite loop above certain input argument?

While learning Java I'm redoing some of the Project Euler problems.
This is about Problem 14 - Longest Collatz sequence: https://projecteuler.net/problem=14
My program runs just fine for a lower CEILING like 1000, but when executed as posted it loops infinitely, I think? What goes wrong here?
public class Test {

    public static void main(String[] args) {
        int tempMax = 0;
        final int CEILING = 1_000_000;
        for (int j = 1; j < CEILING; ++j) {
            tempMax = Math.max(tempMax, collatzLength(j));
        }
        System.out.println(tempMax);
    }

    // computes length of collatz-sequence starting with n
    static int collatzLength(int n) {
        int temp = n;
        for (int length = 1; ; ++length) {
            if (temp == 1)
                return length;
            else if (temp % 2 == 0)
                temp /= 2;
            else
                temp = temp * 3 + 1;
        }
    }
}
Calling System.out.println(collatzLength(1000000)); separately works just fine, so I think we can rule out an error there.
You should use long instead of int. The int overflows while doing your calculations in collatzLength, and that causes the infinite loop: once temp wraps around past Integer.MAX_VALUE it can go negative and the sequence never reaches 1. From the problem description:
NOTE: Once the chain starts the terms are allowed to go above one million.
The number causing the problem: 113383
The long version gives a result, which is still incorrect because you are printing the length of the longest chain, but you need the number which produces the longest chain.
public static void main(String[] args)
{
    int tempMax = 0;
    final int CEILING = 1_000_000;
    for (int j = 1; j < CEILING; ++j)
    {
        tempMax = Math.max(tempMax, collatzLength(j));
    }
    System.out.println(tempMax);
}

static int collatzLength(long n)
{
    long temp = n;
    for (int length = 1;; ++length)
    {
        if (temp == 1)
            return length;
        else if (temp % 2 == 0)
            temp /= 2;
        else
            temp = temp * 3 + 1;
    }
}
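(One way to close the gap the answer points out, as a sketch: track which starting number produces the longest chain, not just its length; bestN is a name introduced here for illustration:)

int tempMax = 0;
int bestN = 1; // remembers the starting number, not just the chain length
for (int j = 1; j < CEILING; ++j) {
    int len = collatzLength(j);
    if (len > tempMax) {
        tempMax = len;
        bestN = j;
    }
}
System.out.println(bestN); // the starting number Project Euler asks for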

Error regarding arguments when calling a method

I am getting an error when making the call to the method findPosition. What I'm trying to do with the program is measure how long the algorithm runs on average over 1000 iterations. Then I want to call the method and find the position of the value in the array (if it is found). I'm getting a compiler error that the arguments being passed do not match the method; specifically, the second argument is being taken as an int, even though I want to pass the array. I can't find my error and am new to working with arrays, so thank you in advance if you can tell me where my mistake is.
import java.util.*;

public class AlgorithmRuntime
{
    static int num = 0;
    static long total = 0;
    static long average = 0;

    public static void main (String[] args)
    {
        boolean isValueInArray;
        int searchValue;

        do {
            // 1. Setup
            int size = 1000;
            long sum = 0;
            int[] iArray = new int[size];
            Random rand = new Random(System.currentTimeMillis());
            for (int i = 0; i < size; i++)
                iArray[i] = rand.nextInt();
            searchValue = rand.nextInt(1000) + 1;

            // 2. Start time
            long start = System.nanoTime();

            // 3. Execute algorithm
            for (int j = 0; j < size; j++)
            {
                if (iArray[j] == searchValue)
                {
                    isValueInArray = true;
                }
                if (isValueInArray == true)
                    findPosition(searchValue, iArray[isValueInArray]);
            }

            // 4. Stop time
            long stop = System.nanoTime();
            long timeElapsed = stop - start;
            total = total + timeElapsed;
            num++;
        } while (num < 1000);

        average = total / 1000;
        System.out.println("The algorithm took " + average
                + " nanoseconds on average to complete.");
    }
}

public int findPosition(int valueOfInt, int[] array)
{
    for (int i = 0; i < array.length; i++)
        if (array[i] == valueOfInt)
            return i;
    return -1;
}
The findPosition method accepts two arguments: one of type int and another of type int array. Here is the signature of findPosition:
public int findPosition(int valueOfInt, int[] array)
but you are passing it two int values, as mentioned here:
findPosition(searchValue, iArray[isValueInArray]);
iArray is an int array, but iArray[isValueInArray] is the int value at index isValueInArray. You need to pass an array as the second param instead of an int value.
You need to pass in
if (isValueInArray == true) {
    findPosition(searchValue, iArray);
}
iArray[isValueInArray] will give you an int value and not the array.
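(Putting both answers together, a minimal self-contained sketch of the corrected structure: findPosition moved inside the class and made static so main can call it, the flag initialized, and the whole array passed as the second argument:)

import java.util.Random;

public class AlgorithmRuntime {

    public static void main(String[] args) {
        int size = 1000;
        int[] iArray = new int[size];
        Random rand = new Random();
        for (int i = 0; i < size; i++)
            iArray[i] = rand.nextInt();
        int searchValue = rand.nextInt(1000) + 1;

        boolean isValueInArray = false; // initialized, unlike the original
        for (int j = 0; j < size; j++) {
            if (iArray[j] == searchValue) {
                isValueInArray = true;
            }
        }
        if (isValueInArray) {
            System.out.println("Found at index " + findPosition(searchValue, iArray)); // whole array passed
        }
    }

    // Static and inside the class, so main can call it directly.
    static int findPosition(int valueOfInt, int[] array) {
        for (int i = 0; i < array.length; i++)
            if (array[i] == valueOfInt)
                return i;
        return -1;
    }
}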

Inline Code is Slower Than Function Calls / Static Functions in Java

I've been running some tests to see how inlining function code (explicitly writing a function's algorithm in the calling code itself) affects performance. I wrote a simple byte-array-to-integer conversion, then wrapped it in a function, called it statically from another class, and called it statically from the class itself. The code is as follows:
public class FunctionCallSpeed {

    public static final int numIter = 50000000;

    public static void main (String[] args) {
        byte[] n = new byte[4];
        long start;

        System.out.println("Function from Static Class =================");
        start = System.nanoTime();
        for (int i = 0; i < numIter; i++) {
            StaticClass.toInt(n);
        }
        System.out.println("Elapsed time: " + (double) (System.nanoTime() - start) / 1000000000 + "s");

        System.out.println("Function from Class ========================");
        start = System.nanoTime();
        for (int i = 0; i < numIter; i++) {
            toInt(n);
        }
        System.out.println("Elapsed time: " + (double) (System.nanoTime() - start) / 1000000000 + "s");

        int actual = 0;
        int len = n.length;
        System.out.println("Inline Function ============================");
        start = System.nanoTime();
        for (int i = 0; i < numIter; i++) {
            for (int j = 0; j < len; j++) {
                actual += n[len - 1 - j] << 8 * j;
            }
        }
        System.out.println("Elapsed time: " + (double) (System.nanoTime() - start) / 1000000000 + "s");
    }

    public static int toInt(byte[] num) {
        int actual = 0;
        int len = num.length;
        for (int i = 0; i < len; i++) {
            actual += num[len - 1 - i] << 8 * i;
        }
        return actual;
    }
}
The results are as follows:
Function from Static Class =================
Elapsed time: 0.096559931s
Function from Class ========================
Elapsed time: 0.015741711s
Inline Function ============================
Elapsed time: 0.837626286s
Is there something weird going on with the bytecode? I've looked at the bytecode myself, but I'm not very familiar with it and can't make heads or tails of it.
EDIT
I added assert statements to read the outputs and then randomized the bytes read, and the benchmark now behaves the way I thought it would. Thanks to Tomasz Nurkiewicz, who pointed me to the microbenchmark article. The resulting code is thus:
public class FunctionCallSpeed {

    public static final int numIter = 50000000;

    public static void main (String[] args) {
        byte[] n;
        long start, end;
        int checker, calc;

        end = 0;
        System.out.println("Function from Object =================");
        for (int i = 0; i < numIter; i++) {
            checker = (int) (Math.random() * 65535);
            n = toByte(checker);
            start = System.nanoTime();
            calc = StaticClass.toInt(n);
            end += System.nanoTime() - start;
            assert calc == checker;
        }
        System.out.println("Elapsed time: " + (double) end / 1000000000 + "s");

        end = 0;
        System.out.println("Function from Class ==================");
        start = System.nanoTime();
        for (int i = 0; i < numIter; i++) {
            checker = (int) (Math.random() * 65535);
            n = toByte(checker);
            start = System.nanoTime();
            calc = toInt(n);
            end += System.nanoTime() - start;
            assert calc == checker;
        }
        System.out.println("Elapsed time: " + (double) end / 1000000000 + "s");

        int len = 4;
        end = 0;
        System.out.println("Inline Function ======================");
        start = System.nanoTime();
        for (int i = 0; i < numIter; i++) {
            calc = 0;
            checker = (int) (Math.random() * 65535);
            n = toByte(checker);
            start = System.nanoTime();
            for (int j = 0; j < len; j++) {
                calc += n[len - 1 - j] << 8 * j;
            }
            end += System.nanoTime() - start;
            assert calc == checker;
        }
        // Note: unlike the two cases above, this prints (System.nanoTime() - start),
        // i.e. only the final iteration's time; "end" was presumably intended here,
        // which explains the tiny inline figure reported below.
        System.out.println("Elapsed time: " + (double) (System.nanoTime() - start) / 1000000000 + "s");
    }

    public static byte[] toByte(int val) {
        byte[] n = new byte[4];
        for (int i = 0; i < 4; i++) {
            n[i] = (byte) ((val >> 8 * i) & 0xFF);
        }
        return n;
    }

    public static int toInt(byte[] num) {
        int actual = 0;
        int len = num.length;
        for (int i = 0; i < len; i++) {
            actual += num[len - 1 - i] << 8 * i;
        }
        return actual;
    }
}
Results:
Function from Static Class =================
Elapsed time: 9.276437031s
Function from Class ========================
Elapsed time: 9.225660708s
Inline Function ============================
Elapsed time: 5.9512E-5s
It's always hard to guarantee what the JIT is doing, but if I had to guess, it noticed the return value of the function was never being used and optimized a lot of it out.
If you actually use the return value of your function, I bet it changes the speed.
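(A minimal sketch of that idea, in the style of the question's first benchmark; the sink field is introduced here for illustration:)

static int sink; // a field the JIT cannot prove unused

start = System.nanoTime();
for (int i = 0; i < numIter; i++) {
    sink += StaticClass.toInt(n); // the result now feeds an observable value
}
System.out.println("Elapsed time: " + (double) (System.nanoTime() - start) / 1000000000 + "s, sink=" + sink);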
I ported your test case to caliper:
import com.google.caliper.SimpleBenchmark;

public class ToInt extends SimpleBenchmark {

    private byte[] n;
    private int total;

    @Override
    protected void setUp() throws Exception {
        n = new byte[4];
    }

    public int timeStaticClass(int reps) {
        for (int i = 0; i < reps; i++) {
            total += StaticClass.toInt(n);
        }
        return total;
    }

    public int timeFromClass(int reps) {
        for (int i = 0; i < reps; i++) {
            total += toInt(n);
        }
        return total;
    }

    public int timeInline(int reps) {
        for (int i = 0; i < reps; i++) {
            int actual = 0;
            int len = n.length;
            for (int i1 = 0; i1 < len; i1++) {
                actual += n[len - 1 - i1] << 8 * i1;
            }
            total += actual;
        }
        return total;
    }

    public static int toInt(byte[] num) {
        int actual = 0;
        int len = num.length;
        for (int i = 0; i < len; i++) {
            actual += num[len - 1 - i] << 8 * i;
        }
        return actual;
    }
}

class StaticClass {
    public static int toInt(byte[] num) {
        int actual = 0;
        int len = num.length;
        for (int i = 0; i < len; i++) {
            actual += num[len - 1 - i] << 8 * i;
        }
        return actual;
    }
}
And indeed it seems the inlined version is the slowest while the two static versions are almost the same, as expected.
The reasons are hard to pin down. I can think of two factors:
1. The JVM is better at performing micro-optimizations when code blocks are as small and as simple to reason about as possible. When the function is inlined, the whole block becomes more complex and the JVM gives up; with the smaller toInt() method, the JIT can be more clever.
2. Cache locality: somehow the JVM performs better with two small chunks of code (the loop and the method) rather than one bigger chunk.
You have several problems, but the main one is that you are timing a single pass over code that is still being optimised. That is sure to give you mixed results. I suggest running the test for 2 seconds, ignoring the first 10,000 iterations or so.
If the result of a loop is not kept, the entire loop can be discarded after some random interval.
Breaking each test into a separate method:
public class FunctionCallSpeed {

    public static final int numIter = 50000000;
    private static int dontOptimiseAway;

    public static void main(String[] args) {
        byte[] n = new byte[4];
        for (int i = 0; i < 10; i++) {
            test1(n);
            test2(n);
            test3(n);
            System.out.println();
        }
    }

    private static void test1(byte[] n) {
        System.out.print("from Static Class: ");
        long start = System.nanoTime();
        for (int i = 0; i < numIter; i++) {
            dontOptimiseAway = FunctionCallSpeed.toInt(n);
        }
        System.out.print((System.nanoTime() - start) / numIter + "ns ");
    }

    private static void test2(byte[] n) {
        long start;
        System.out.print("from Class: ");
        start = System.nanoTime();
        for (int i = 0; i < numIter; i++) {
            dontOptimiseAway = toInt(n);
        }
        System.out.print((System.nanoTime() - start) / numIter + "ns ");
    }

    private static void test3(byte[] n) {
        long start;
        int actual = 0;
        int len = n.length;
        System.out.print("Inlined: ");
        start = System.nanoTime();
        for (int i = 0; i < numIter; i++) {
            for (int j = 0; j < len; j++) {
                actual += n[len - 1 - j] << 8 * j;
            }
            dontOptimiseAway = actual;
        }
        System.out.print((System.nanoTime() - start) / numIter + "ns ");
    }

    public static int toInt(byte[] num) {
        int actual = 0;
        int len = num.length;
        for (int i = 0; i < len; i++) {
            actual += num[len - 1 - i] << 8 * i;
        }
        return actual;
    }
}
prints
from Class: 7ns Inlined: 11ns from Static Class: 9ns
from Class: 6ns Inlined: 8ns from Static Class: 8ns
from Class: 6ns Inlined: 9ns from Static Class: 6ns
This suggests that when the inner loop is optimised separately it is slightly more efficient.
However, if I use an optimised conversion of bytes to int:
public static int toInt(byte[] num) {
    return num[0] + (num[1] << 8) + (num[2] << 16) + (num[3] << 24);
}
all the tests report
from Static Class: 0ns from Class: 0ns Inlined: 0ns
from Static Class: 0ns from Class: 0ns Inlined: 0ns
from Static Class: 0ns from Class: 0ns Inlined: 0ns
as it's realised the test doesn't do anything useful. ;)
Your test is flawed. The second test gets the benefit of the first test already having been run. You need to run each test case in its own JVM invocation.
