I'm developing a math function and would like to test its output at every float value within a range. I have already done this in C++ but now I want to compare the performance to Java. How do I iterate through all float values in Java?
In C++, I'm simply iterating through the necessary range with an unsigned int, then reinterpreting its address as a float pointer:
float *x = reinterpret_cast<float*>(&i);
However, how can this be done in Java? Preferably quickly, as I am testing the performance (no String solutions thank you :D ). If there's no fast way, I guess I could just pre-calculate a million of them into an array and iterate through them. But that would mess up the cache performance, so I think using random numbers would then be better for my case, although it won't quite hit all values.
You can use Math.nextUp(float) to get the next float number.
Example to print the next 100 floats starting from 1:
float n = 1f;
System.out.println(n);
for (int i = 0; i < 100; i++) {
    n = Math.nextUp(n);
    System.out.println(n);
}
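For the range-sweep use case in the question, a minimal sketch could drive the loop directly with Math.nextUp; the bounds here are placeholders, not values from the question:

// Sketch: visit every representable float in [start, end), assuming both
// endpoints are finite. 0.5f and 2.0f are just illustrative bounds.
for (float x = 0.5f; x < 2.0f; x = Math.nextUp(x)) {
    // call the function under test with x here
}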
In Java you can use the method Float.intBitsToFloat(int):
for (int i = iMin; i < iMax; i++) {
    float f = Float.intBitsToFloat(i);
    ...
}
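Note that increasing bit patterns only correspond to increasing float values for non-negative floats; negative floats have the sign bit set and their patterns run the other way, so they would need a separate descending loop. A rough sketch for sweeping the non-negative finite range (bounds chosen for illustration; this is around 2.1 billion iterations):

// Sketch: iterate every non-negative finite float by walking its raw bit pattern.
// Bit pattern 0x00000000 is +0.0f; 0x7F7FFFFF is Float.MAX_VALUE.
int iMin = Float.floatToIntBits(0.0f);            // 0x00000000
int iMax = Float.floatToIntBits(Float.MAX_VALUE); // 0x7F7FFFFF
for (int i = iMin; i <= iMax; i++) {
    float f = Float.intBitsToFloat(i);
    // test the function with f here
}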
I've searched about Big O notation for some time, and I learned that when calculating it we have to assume that every statement that doesn't depend on the size of the input data takes a constant C number of computational steps.
My program goes like this.
It "always" takes in a "48 bit" random seed and produces an output, and the actual moves that happen within the process of producing the output varies according to the seed value itself, not by the size because it's fixed.
I for looped this process for n times, in order to get n outputs.
Does this mean the Big O notation is O(n) for my program? Or am I completely misunderstanding something?
So, the number of loop iterations is just something I write in the code. For example, if I set it to 1000, it takes in 1000 input seeds and produces 1000 outputs. The process within the loop is fixed: the number of for loops and the number of if-else or switch statements inside the bigger loop doesn't change. The only thing that changes inside the bigger loop is which "if statement" gets chosen, depending on the value of the seed.
The complexity is always expressed relative to something.
Since your input length is constant, it doesn't make much sense to express the complexity relative to that. It may have O(n) complexity relative to the number of loop iterations, but again, since that value is hard-coded, this information will have little value to the user.
Perhaps the most useful in your case is information about what the complexity is relative to the input value. If that is constant, you can say that your program performs in constant time, because no matter what the (valid) user input will be, the time it will take your program to produce the output will roughly be the same.
int f(int[] a, int[] b, int c) {
    int n = a.length;
    int m = b.length;
    int y = 0;
    for (int i = 0; i < n; ++i) {
        y += a[i];
        for (int j = 0; j < m; ++j) {
            y -= b[j];
        }
    }
    for (int k = 0; k < c; ++k) {
        y += 13;
    }
    return y;
}
So the complexity is O(n·m) + O(c), counting steps (loop iterations, recursive calls).
for (long k = seed; k > 1; k /= 2) {
    ...;
}
would give O(log₂ seed), at most 48 steps for a 48-bit seed.
Strictly speaking, O(n) means that this n is an algorithm parameter/input of some kind, or derived from it. It can be the length of the input, or even of the output, but it is derived from an algorithm parameter.
So, this O(n) has meaning when we talk about your procedure "loop this process n times", if you automated that procedure with some script. The algorithm itself still works in O(1) time. If you don't automate the procedure, just forget about Big O; manual action makes it irrelevant.
I have a program which multiplies a probability over 500 times, but when I am doing so the output is zero. Should I use some other data type?
Please help.
Here is the code I am using:
double d = 1/80000d;
for (int i = 0; i < 500; i++) {
    d *= d;
}
System.out.println(d);
The output is zero because double has limited precision, and if you multiply a number lower than 1 by itself enough times, you'll get a result too small to be distinguished from 0.
If you print d after each iteration, you'll see that it becomes 0 quite fast:
1.5625E-10
2.4414062500000002E-20
5.960464477539064E-40
3.552713678800502E-79
1.2621774483536196E-157
1.593091911E-314
0.0
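For reference, the cutoff in the trace above matches the limits of the double type; a small sketch illustrating where the squaring finally underflows (the value of tiny is taken from the printed trace):

// Sketch: Double.MIN_VALUE (about 4.9E-324) is the smallest positive double.
// Once d is small enough, d * d can no longer be represented and becomes 0.0.
double tiny = 1.2621774483536196E-157;             // value from the trace above
System.out.println(Double.MIN_VALUE);              // 4.9E-324
System.out.println(tiny * tiny);                   // 1.593091911E-314 (subnormal)
System.out.println((tiny * tiny) * (tiny * tiny)); // 0.0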
When working with probabilities, you can avoid these sorts of numerical issues by working with logarithms instead, so that you can work additively. Something like:
double d = 1/80000d;
double ld = Math.log(d);
for (int i = 0; i < 500; i++) {
    ld += ld;
}
System.out.println(ld);
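A small follow-up sketch, continuing from the ld variable above; the printed magnitude is an approximation based on ln(1/80000) doubled 500 times:

// ld ends up around -3.7E151 here, which is still an ordinary double,
// while converting it back to a probability underflows again.
System.out.println(ld);                // about -3.7E151
System.out.println(ld / Math.log(10)); // base-10 exponent of the probability
System.out.println(Math.exp(ld));      // 0.0, as expected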
Naturally, if you have two numbers less than 1 and repeat the multiplication enough times, sooner or later the result will be too small to be represented in double, Extended, or any floating-point arithmetic, now or in the future. ;)
What you get back is the approximation that has been stored in the type. Zero is one of the special values of the IEEE 754 format.
I do not know Java, but other languages have an Extended type.
I am trying to generate random integers within a range to sample a percentage of that range. For example: for the range 1 to 100 I would like to select a random sample of 20%. This would result in 20 integers randomly selected from 1 to 100.
This is to solve an extremely complex issue and I will post solutions once I get this and a few bugs worked out. I have not used many math packages in java so I appreciate your assistance.
Thanks!
Put all the numbers in an ArrayList, then shuffle it. Take only the first 20 elements of the ArrayList:
ArrayList<Integer> randomNumbers = new ArrayList<Integer>();
for (int i = 0; i < 100; i++) {
    randomNumbers.add(i + 1);   // fill the list with the numbers 1..100
}
Collections.shuffle(randomNumbers);
// Then the first 20 elements are your sample
If you want 20 random integers between 1 and 100, use Math.random() to generate a value between 0 and 0.999... Then, manipulate this value to fit your range.
int[] random = new int[20];
for (int i = 0; i < random.length; i++) {
    random[i] = (int)(Math.random() * 100 + 1);
}
When you multiply Math.random() by 100, you get a value between 0 and 99.999... Adding 1 yields a value between 1.0 and 100.999..., and casting with (int) truncates the decimal part, giving a number between 1 and 100 inclusive. The values are then stored in an array.
If you are willing to go with Java 8, you could use some of its stream and lambda features. Presuming that you aren't keeping 20% of petabytes of data, you could do something like this (number is the count of integers to pick from the range). It isn't efficient in the slightest, but it works, and it's fun if you'd like to try some Java 8. If this is performance critical, though, I wouldn't recommend it:
public ArrayList<Integer> sampler(int min, int max, int number) {
    Random random = new Random();
    ArrayList<Integer> generated = new ArrayList<Integer>();
    IntStream ints = random.ints(min, max);
    Iterator<Integer> it = ints.iterator();
    for (int i = 0; i < number; i++) {
        int k = it.next();
        while (generated.contains(k)) {
            k = it.next();
        }
        generated.add(k);
    }
    ints.close();
    return generated;
}
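For what it's worth, the same idea can be written more compactly by letting the stream itself drop duplicates. A sketch of that variant (method name is just illustrative; it only terminates if number does not exceed the size of the range):

import java.util.List;
import java.util.Random;
import java.util.stream.Collectors;

public List<Integer> samplerCompact(int min, int max, int number) {
    Random random = new Random();
    // ints(min, max) is an endless stream of values in [min, max);
    // distinct() drops repeats and limit(number) stops once enough survive.
    return random.ints(min, max)
                 .distinct()
                 .limit(number)
                 .boxed()
                 .collect(Collectors.toList());
}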
If you really need to scale to petabytes of data, you're going to need a solution that doesn't require keeping all your numbers in memory. Even a bit-set, which would compress your numbers to 1 byte per 8 integers, wouldn't fit in memory.
Since you didn't mention the numbers had to be shuffled (just random), you can start counting and randomly decide whether to keep each number or not. Then stream your result to a file or wherever you need it.
Start with this:
long range = 100;
float percentile = 0.20f;
Random rnd = new Random();
for (long i = 1; i <= range; i++) {
    if (rnd.nextFloat() < percentile) {
        System.out.println(i);
    }
}
You will get about 20 percent of the numbers from 1 to 100, with no duplicates.
As the range goes up, the accuracy will too, so you really wouldn't need any special logic for large data sets.
If an exact number is needed, you would need special logic for smaller data sets, but that's pretty easy to solve using other methods posted here (although I'd still recommend a bit set).
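If an exact count really is required, here is a sketch of the bit-set approach hinted at above (method and variable names are just illustrative):

import java.util.BitSet;
import java.util.Random;

// Sketch: draw exactly count distinct values from 1..range, using a BitSet
// to remember which values have already been picked.
static int[] exactSample(int range, int count) {
    Random rnd = new Random();
    BitSet chosen = new BitSet(range + 1);
    int[] result = new int[count];
    int filled = 0;
    while (filled < count) {
        int candidate = rnd.nextInt(range) + 1;  // uniform in 1..range
        if (!chosen.get(candidate)) {
            chosen.set(candidate);
            result[filled++] = candidate;
        }
    }
    return result;
}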
I'm implementing 2 algorithms for the TSP, which use a class that includes the routes, their cost, etc. At the minute it uses random values, which is fine, although I now need to compare the algorithms, so to make this fair I need to make the inputs the same (which is obviously unlikely to happen when using random inputs!). The issue I'm having is that I don't know how to change it from random values to inserting pre-determined values into the 2D array, and on top of that I also don't know how to calculate the costs of these values.
Randomly generates node values:
Random rand = new Random();
for (int i = 0; i < nodes; i++) {
    for (int j = i; j < nodes; j++) {
        if (i == j)
            Matrix[i][j] = 0;
        else {
            Matrix[i][j] = rand.nextInt(max_distance);
            Matrix[j][i] = Matrix[i][j];
        }
    }
}
I'm assuming for the above that I declare a matrix of, say, [4][4] and then write int matrix[][] = the values to insert?
I do need help with some other sections of this class, but I think I need to make sure this part is right before asking any more!
Thanks a lot in advance!
You can do the initialization of a 2D array like this:
double matrix[][] = { { v1, v2, ..., vn }, { x1, x2, ..., xn }, ..., { y1, y2, ..., yn } };
where each inner {} corresponds to the outer (first) index and each element inside it corresponds to the inner (second) index.
Example: to access element x1 you do this:
matrix[1][0];
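Applied to your question, a hard-coded 4x4 symmetric distance matrix (the distance values below are made up purely for illustration) might look like this:

// Sketch: symmetric 4x4 distance matrix with a zero diagonal,
// matching the structure produced by the random generator above.
int matrix[][] = {
    {  0, 10, 15, 20 },
    { 10,  0, 35, 25 },
    { 15, 35,  0, 30 },
    { 20, 25, 30,  0 }
};
// matrix[2][1] and matrix[1][2] are both 35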
This is the answer that you asked for, but I still think it's better to use the same set of random values for both algorithms; Jon Taylor showed a good way of doing that. The code to set the seed looks like this:
int seed = INTEGER_VALUE;
Random rand = new Random(seed);
This way you will always get the same set of values.
You could set a seed instead for each random number generator therefore guaranteeing that for each implementation you test, the same sequence of pseudo-random numbers is being created.
This would save the effort of manually entering lots of values.
Edit to show seed method:
Random r = new Random(56);
Every time r is created with the seed of 56 it will produce the exact same sequence of random numbers. Without a seed, I believe it defaults to the system time (giving the illusion of truly random numbers).
I have a sorted list of ratios, and I need to find a "bin size" that is small enough so that none of them overlap. To put it shortly, I need to do what the title says. If you want a little background, read on.
I am working on a graphical experiment that deals with ratios and the ability of the eye to distinguish between these ratios quickly. So when we are forming these experiments, we use flashes of dots with various ratios chosen from dot bins. A bin is just a range of possible ratios with the mentioned array elements in the center. All dot bins need to be the same size. This means that we need to find the elements in the array that are nearest each other. Keep in mind that the array is sorted.
Can anyone think of a quick cool way to do this? I have never been particularly algorithmically inclined, so right now I am just running through the array backwards and subtracting the next element from the current one and checking that against a sum. Thanks
private double findNumerostyBinRangeConstant(double[] ratios) {
    int minI = 0;
    double min = 0;
    for (int i = ratios.length - 1; i > 0; i--) {
        if (ratios[i] - ratios[i-1] > min) {
            min = ratios[i] - ratios[i-1];
            minI = i;
        }
    }
    return Math.sqrt(ratios[minI]/ratios[minI - 1]); // Essentially a geometric mean. Doesn't really matter.
}
Forward-moving version; fixed some logic issues that you had. Since you are looking for the minimum, your initial comparison variable should start at the maximum double value. Removed the comparison by subtraction because you weren't using it later, and replaced it with the division.
Note: Haven't tested the fringe cases, including zeros and negatives.
private double findNumerostyBinRangeConstant(double[] ratios) {
    double result = Double.MAX_VALUE;
    for (int i = 0; i < ratios.length - 1; i++) {
        if (ratios[i+1] / ratios[i] < result) {
            result = ratios[i+1] / ratios[i];
        }
    }
    return Math.sqrt(result);
}
Only change: flipped the array search to go in increasing direction -- many architectures prefer going in positive direction. (Some don't.) Haven't verified that I didn't introduce an off-by-one error. (Sorry.)
private double findNumerostyBinRangeConstant(double[] ratios) {
    int minI = 0;
    double min = Double.MAX_VALUE;
    for (int i = 0; i < ratios.length - 1; i++) {
        if (ratios[i+1] - ratios[i] < min) {
            min = ratios[i+1] - ratios[i];
            minI = i;
        }
    }
    return Math.sqrt(ratios[minI+1]/ratios[minI]);
}