Interpret infinity in a knapsack task - Java

We have an algorithm with a recursive formula for a knapsack problem, with asymptotic complexity O(n^2 * Vmax), where n is the number of items and Vmax is the maximum value:
A[i, x] = the minimum total size needed to achieve value >= x while using only the first i items.
A[i, x] = min{ A[i-1, x], A[i-1, x - v[i]] + w[i] }, where A[i-1, x - v[i]] = 0 if v[i] >= x. Base case: A[0, x] = 0 if x == 0, and +infinity otherwise.
For +infinity in Java I use Integer.MAX_VALUE. When the algorithm runs there are calls A[0,1], A[0,2], ..., and the matrix fills with negative numbers (int overflow).
How can I represent infinity, and infinity plus a number?

If you are not storing infinity anywhere and use it only for comparisons, you can use double infinity, which is larger than both the int and long maximum values:
Double.POSITIVE_INFINITY
If you are storing the numbers and you still overflow, store the numbers as long in the array and compare them to Integer.MAX_VALUE; that way you will know a number is bigger than your maximum value without overflowing. The downsides are slower execution and higher memory consumption.

If you know the maximum possible value of your input, use a sentinel slightly larger than that value. That should do the trick.
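A minimal sketch of the recurrence with a guarded infinity, combining the suggestions above: store the table as long and use a sentinel of Long.MAX_VALUE / 2, so that sentinel + w[i] still cannot overflow. The class, method, and sample numbers are illustrative, not from the question:

```java
public class KnapsackMinSize {
    // "Infinity" sentinel: half of Long.MAX_VALUE, so INF + w[i] never overflows
    static final long INF = Long.MAX_VALUE / 2;

    // A[i][x] = min total size needed to achieve value >= x using the first i items
    static long minSizeForValue(int[] v, int[] w, int targetValue) {
        int n = v.length;
        long[][] A = new long[n + 1][targetValue + 1];
        for (int x = 1; x <= targetValue; x++) A[0][x] = INF; // base case: +infinity
        for (int i = 1; i <= n; i++) {
            for (int x = 0; x <= targetValue; x++) {
                // A[i-1][x - v[i]] is treated as 0 when v[i] >= x, per the formula
                long take = (v[i - 1] >= x) ? w[i - 1]
                          : A[i - 1][x - v[i - 1]] + w[i - 1]; // safe: INF + w < Long.MAX_VALUE
                A[i][x] = Math.min(A[i - 1][x], take);
            }
        }
        return A[n][targetValue];
    }

    public static void main(String[] args) {
        int[] v = {3, 2, 4};
        int[] w = {4, 3, 2};
        // smallest total size achieving value >= 5 (items 2 and 3 here)
        System.out.println(minSizeForValue(v, w, 5)); // 5
    }
}
```

Because min(INF, INF + w) is always INF, unreachable states stay pinned at the sentinel instead of compounding toward overflow.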

Related

Explanation about a result

Hi, I have a little problem with some code: I can't explain the result I get.
// What happens?
public static void what() {
    int number = 2147483647;
    System.out.println(number + 33);
}
// Here is my solution for the problem
public static void what() {
    long number = 2147483647;
    System.out.println(number + 33);
}
The first snippet, with number declared as int, gives -2147483616 as the result. When I change the int to long I get the expected result. So the question is: can anyone explain why number + 33 equals -2147483616?
Java integers are 32 bits wide. The first bit is reserved for the sign (0 = positive, 1 = negative).
So 2147483647 equals 01111111 11111111 11111111 11111111.
Adding more will force the value to turn to negative because the first bit is turned into a 1.
10000000 00000000 00000000 00000000 equals -2147483648.
The remaining 32 you are adding to -2147483648 brings you to your result of -2147483616.
The primitive int type has a maximum value of 2147483647, which is what you are setting number to. When anything is added to this value the int type cannot represent it correctly and 'wraps' around, becoming a negative number.
The maximum value of the long type is 9223372036854775807 so the second code snippet works fine because long can hold that value with no problem.
You have reached the maximum of the primitive type int (2147483647).
If int overflows, it goes back to the minimum value (-2147483648) and continues from there.
Consider the calculation of the second snippet, and what the result actually means.
long number = 2147483647;
number += 33;
The result in decimal is 2147483680, in hexadecimal (which more easily shows what the value means) it is 0x80000020.
For the first snippet, the result in hexadecimal is also 0x80000020, because the result of arithmetic with the int type is the low 32 bits of the "full" result. What's different is the interpretation: as an int, 0x80000020 has the top bit set, and the top bit has a "weight" of -2^31, so this result is interpreted as -2^31 + 32 (a negative number). As a long, the 32nd bit is just a normal bit with a weight of 2^31, and the result is interpreted as 2^31 + 32.
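The bit-level explanation above can be checked directly: the int and long results share the same low 32 bits, and only the interpretation of the top bit differs.

```java
public class OverflowDemo {
    public static void main(String[] args) {
        int asInt = 2147483647 + 33;    // wraps around in 32-bit arithmetic
        long asLong = 2147483647L + 33; // exact in 64-bit arithmetic

        System.out.println(asInt);                      // -2147483616
        System.out.println(asLong);                     // 2147483680
        // Same low 32 bits in both cases: 0x80000020
        System.out.println(Integer.toHexString(asInt)); // 80000020
        System.out.println(Long.toHexString(asLong));   // 80000020
    }
}
```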
The primitive type int is a 32-bit integer that can only store from -2^31 to 2^31 - 1 whereas long is a 64-bit integer so it can obviously store a much larger value.
When we calculate the capacity of int, it goes from -2147483648 to 2147483647.
Now you are wondering: why is it that when the number exceeds the limit after adding 33, it becomes -2147483616?
This is because the value "wraps around" after exceeding its limit.
Thus, 2147483647 + 1 will lead to -2147483648. From here, you can see that -2147483648 + 32 will lead to the value in your example which is -2147483616.
Some extra info below:
Unless you really need to use a number that is greater than the capacity of int, always use int as it takes up less memory space.
Also, should your number be bigger than long, consider using BigInteger.
Hope this helps!

How do I make only "n" comparisons finding min and max from text file?

So I have a file that has n number of integers in it. I need to find a way to make n comparisons when finding the min and max instead of 2n comparisons. My current code makes 2n comparisons...
min = max = infile.nextInt();
while (infile.hasNextInt())
{
    int placeholder = infile.nextInt(); // works as a placeholder
    if (placeholder < min)
    {
        min = placeholder;
    }
    if (placeholder > max)
    {
        max = placeholder;
    }
}
NOTE: I can only change what is in the while loop. I just do not know how I would easily find the min and max using a basic for loop... Is there any simple solution to this? What am I missing?
I don't think you can do this in n comparisons. You can do it in 3n/2 - 2 comparisons as follows:
Take the items in pairs and compare the items in each pair. Put the higher values from each comparison in one list, and the lower values in another. That takes n/2 comparisons.
Find the maximum from the higher-values list: n/2-1 comparisons.
Find the minimum from the lower-values list: n/2-1 comparisons.
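A sketch of this pairing scheme in Java, assuming the input is available as a list rather than a Scanner; the class and method names are made up for illustration:

```java
import java.util.List;

public class PairwiseMinMax {
    // Finds min and max in about 3n/2 comparisons by processing items in pairs.
    static int[] minMax(List<Integer> values) {
        int n = values.size();
        int min, max, start;
        if (n % 2 == 0) {                 // even count: seed min/max from the first pair
            int a = values.get(0), b = values.get(1);
            if (a < b) { min = a; max = b; } else { min = b; max = a; }
            start = 2;
        } else {                          // odd count: seed both from the first element
            min = max = values.get(0);
            start = 1;
        }
        for (int i = start; i < n; i += 2) {
            int a = values.get(i), b = values.get(i + 1);
            int lo, hi;
            if (a < b) { lo = a; hi = b; } else { lo = b; hi = a; } // 1 comparison per pair
            if (lo < min) min = lo;                                 // 1 comparison
            if (hi > max) max = hi;                                 // 1 comparison
        }
        return new int[] { min, max };
    }

    public static void main(String[] args) {
        int[] r = minMax(List.of(7, 3, 9, 1, 4, 8));
        System.out.println(r[0] + " " + r[1]); // 1 9
    }
}
```

Each pair costs three comparisons instead of four, which is where the 3n/2 bound comes from.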
As MadPhysicist stated, you could change the two ifs into an if / else if:
min = max = infile.nextInt();
while (infile.hasNextInt()) {
    int placeholder = infile.nextInt(); // works as a placeholder
    if (placeholder <= min) {
        min = placeholder;
    } else if (placeholder > max) {
        max = placeholder;
    }
}
In the best case (a strictly decreasing sequence, where each value is smaller than the previous one) you only need n-1 comparisons.
In the worst case (a strictly increasing sequence, where each value is larger than the previous one) you will still need 2*(n-1) comparisons. You cannot completely eliminate this case: if a value is larger than the current minimum it could be a new maximum value.
In the typical case (a random sequence of values) you will need something between n-1 and 2*(n-1) comparisons.
Also note that I changed the comparison for the minimum value from < to <=: if a value is equal to the minimum value it cannot be a new maximum value at the same time.
I think your approach is optimal because it takes O(n) comparisons. n or 2n is not important according to Big O:
int min = Integer.MAX_VALUE;
int max = Integer.MIN_VALUE;
while (infile.hasNextInt()) {
    int val = infile.nextInt();
    if (val < min)
        min = val;
    if (val > max) // separate if: with sentinel initial values, an else-if would miss the max on a decreasing sequence
        max = val;
}
You can do the same using additional storage; in this case you have fewer explicit comparisons in your own code, but extra space:
TreeSet<Integer> unique = new TreeSet<>();
while(infile.hasNextInt())
unique.add(infile.nextInt());
int min = unique.pollFirst();
int max = unique.pollLast();
Given that you initialize the min/max to the first element, you already have 2(n - 1) comparisons. Furthermore, if you change the two ifs into an if / else if, you will save at least one more comparison, for a total of 2n - 3.
A generalization of @Matt Timmermans' answer is therefore possible:
Split your input into groups of size k. Find the max and min of each group using 2k - 3 comparisons. This leaves you with n/k items to check for a minimum and n/k items to check for a maximum. You have two options:
Just make the comparisons, for a total of (n/k) * (2k - 3) + 2 * (n/k - 1). This shows that Matt's answer is optimal, since the expression is smallest for k = 2 (the fraction reduces to something over k for all values of n).
Continue splitting into groups of size k (or some other size). Finding the maximum of k elements requires k-1 comparisons. So you can split your n/k minimum candidates into groups of k again to get n/k^2 candidates, for an additional (n/k) * (k-1) comparisons. You can continue the process to get a total of (n/k) * (2k - 3) + 2 * (k - 1) * Σ n/k^i. The sum evaluates to n / (k-1), so the total is > 2n, even compensating for the hand-waving over-approximation implicit in the sum.
The reason that approach #2 does not reduce the number of comparisons is that the most gain is to be had from splitting the list into two sets of candidates for each criterion. The remainder of the calculation is best optimized through a single pass through each list.
The moral of the story is that while you can save a couple of comparisons here and there, you probably shouldn't. You have to consider the amount of overhead you incur with setting up additional lists (or even doing in-place swapping), as well as the reduced legibility of your code, among other factors.

Java while loop printing squared numbers can't use int?

In Java, create a do-while loop that starts at 2, and displays the number squared on each line while the number is less than 1,000,000. And this is what I have:
int k = 2;
do {
    System.out.println(k);
    k *= k;
} while (k < 1000000);
The problem is the output: somehow it gets stuck at 0 and loops forever, printing 0. I don't believe it is because the number is out of int range, since a 32-bit number's range is around +/- 2 billion... But when I change the type of k to long, everything works fine... Why is this?
It really is due to int. The sequence produced this way is
2
4
16
256
65536
0
And then it remains zero. Note that it never rises above 1000000.
With a long, the number after 65536 would be 4294967296 (which does not fit in an int, but does fit in a long), so it stops.
This is perhaps more obvious in hexadecimal, the sequence then reads (with sufficiently long integers)
2
4
0x10
0x100
0x10000
0x100000000
An int can only keep the lowest 8 hexadecimal digits, so 0x100000000 becomes 0.
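The cut-off described above can be verified directly: the true square of 65536 is 0x100000000, which needs 33 bits, so the int result keeps only the low 32 bits, all of which are zero.

```java
public class SquareOverflow {
    public static void main(String[] args) {
        int k = 65536;
        // Widening one operand to long gives the "true" square
        System.out.println((long) k * k); // 4294967296
        // Pure int arithmetic keeps only the low 32 bits of 0x100000000
        System.out.println(k * k);        // 0
    }
}
```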
Your code to print squares should be
int k = 2;
do {
    System.out.println(k * k);
    k++;
} while (k < 1000000);
As you are storing the result in the same variable, your number is squared on every iteration and therefore grows exponentially, instead of stepping through each value.

Issue with saturation after calculations in Java

I'm creating a Reverse Polish Calculator and am having issues with saturation. I've implemented a stack, and have found the largest number I can get to without having the issue is 2147483647. So if I push this number to the stack, then add 1, the result I get is -2147483648 (negative). What I need to do is instead of returning this negative number, return the original number 2147483647. Basically have this as a limit. The same applies to the negative side of things, where the limit is -2147483648. Let me know if I have missed any info or you need to see code.
The largest int value is 2147483647 and the smallest is -2147483648. int values wrap around, so when you add 1 to 2147483647 you get -2147483648 and when you subtract 1 from -2147483648 you get 2147483647.
If you don't want this behaviour you can do
int a = 2147483647;
a += 1.0; // a is still 2147483647
or
int a = 2147483647;
int b = (int) (a + 1.0); // b is also 2147483647
These work because a + 1.0 is calculated using double arithmetic (no overflow), and the result is converted back to an int using the rule that values bigger than Integer.MAX_VALUE become Integer.MAX_VALUE (and values smaller than Integer.MIN_VALUE become Integer.MIN_VALUE).
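Another option on Java 8 and later, not mentioned in the answer above, is Math.addExact, which throws ArithmeticException on overflow instead of wrapping; that makes a saturating add straightforward to write (the class and method names here are illustrative):

```java
public class SaturatingAdd {
    // Returns Integer.MAX_VALUE / Integer.MIN_VALUE instead of wrapping on overflow.
    static int saturatingAdd(int a, int b) {
        try {
            return Math.addExact(a, b); // throws ArithmeticException on overflow
        } catch (ArithmeticException e) {
            // Overflow requires both operands to have the same sign,
            // so the sign of either operand tells us which limit was crossed.
            return (a > 0) ? Integer.MAX_VALUE : Integer.MIN_VALUE;
        }
    }

    public static void main(String[] args) {
        System.out.println(saturatingAdd(2147483647, 1));   // 2147483647
        System.out.println(saturatingAdd(-2147483648, -1)); // -2147483648
    }
}
```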

Issue with implementation of Fermat's little theorem

Here's my implementation of Fermat's little theorem. Does anyone know why it's not working?
Here are the rules I'm following:
Let n be the number to test for primality.
Pick any integer a between 2 and n-1.
Compute a^n mod n.
Check whether a^n ≡ a (mod n).
My code:
int low = 2;
int high = n -1;
Random rand = new Random();
//Pick any integer a between 2 and n-1.
Double a = (double) (rand.nextInt(high-low) + low);
//compute:a^n = a mod n
Double val = Math.pow(a,n) % n;
//check whether a^n = a mod n
if(a.equals(val)){
return "True";
}else{
return "False";
}
This is a list of primes less than 100,000. Whenever I input any of these numbers, instead of getting 'true', I get 'false'.
The First 100,008 Primes
This is the reason why I believe the code isn't working.
In Java, a double only has a limited precision of about 15 to 17 significant digits. This means that while you can compute the value of Math.pow(a, n) for very large numbers, you have no guarantee you'll get an exact result once the value has more than 15 digits.
With large values of a or n, your computation will exceed that limit. For example,
Math.pow(3, 67) will have a value of 9.270946314789783e31, which means that any digit after the last 3 is lost. For this reason, after applying the modulo operation, you have no guarantee of getting the right result (example).
This means that your code does not actually test what you think it does. This is inherent to the way floating point numbers work, and you must change the way you hold your values to solve this problem. You could use long, but then you would have problems with overflow (a long cannot hold a value greater than 2^63 - 1, so again, in the case of 3^67 you'd have another problem).
One solution is to use a class designed to hold arbitrary large numbers such as BigInteger which is part of the Java SE API.
As the others have noted, taking the power will quickly overflow. For example, if you are testing a number n for primality as small as, say, 30, and the random number a is 20, then 20^30 is about 10^39, vastly more than any primitive integer type can hold.
You want to use BigInteger, which even has the exact method you want:
public BigInteger modPow(BigInteger exponent, BigInteger m)
"Returns a BigInteger whose value is (this^exponent mod m)"
Also, I don't think that testing a single random number between 2 and n-1 will "prove" anything. You have to loop through all the integers between 2 and n-1.
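A sketch of the suggested BigInteger approach (class and method names here are illustrative). Note the last case: 341 = 11 * 31 passes the base-2 test even though it is composite, because it is a base-2 pseudoprime (the first entry of OEIS A001567):

```java
import java.math.BigInteger;

public class FermatTest {
    // Checks a^n ≡ a (mod n) exactly, with no floating-point precision loss.
    static boolean fermatProbablePrime(BigInteger n, BigInteger a) {
        return a.modPow(n, n).equals(a.mod(n));
    }

    public static void main(String[] args) {
        BigInteger a = BigInteger.valueOf(2);
        System.out.println(fermatProbablePrime(BigInteger.valueOf(97), a));  // true: 97 is prime
        System.out.println(fermatProbablePrime(BigInteger.valueOf(100), a)); // false: composite
        System.out.println(fermatProbablePrime(BigInteger.valueOf(341), a)); // true, yet 341 = 11 * 31
    }
}
```

This fixes the precision problem but, as the following answer explains, it remains only a probabilistic test.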
@evthim: Even if you use the modPow function of the BigInteger class, you cannot correctly classify every number in the range this way. To clarify: you will get all of the prime numbers, but some composite numbers will also pass the test. If you rearrange this code using the BigInteger class and try all 64-bit numbers, some non-prime numbers (pseudoprimes) will also be reported as prime. These numbers are as follows:
341, 561, 645, 1105, 1387, 1729, 1905, 2047, 2465, 2701, 2821, 3277, 4033, 4369, 4371, 4681, 5461, 6601, 7957, 8321, 8481, 8911, 10261, 10585, 11305, 12801, 13741, 13747, 13981, 14491, 15709, 15841, 16705, 18705, 18721, 19951, 23001, 23377, 25761, 29341, ...
https://oeis.org/a001567
161038, 215326, 2568226, 3020626, 7866046, 9115426, 49699666, 143742226, 161292286, 196116194, 209665666, 213388066, 293974066, 336408382, ..., 2001038066, 2138882626, 2952654706, 3220041826, ...
https://oeis.org/a006935
As a solution, make sure that the number you tested is not in this list by getting a list of these numbers from the link below.
http://www.cecm.sfu.ca/Pseudoprimes/index-2-to-64.html
The solution in C# is as follows:
public static bool IsPrime(ulong number)
{
    return number == 2
        ? true
        : (BigInteger.ModPow(2, number, number) == 2
            ? ((number & 1) != 0 && BinarySearchInA001567(number) == false)
            : false);
}
public static bool BinarySearchInA001567(ulong number)
{
// Is number in list?
// todo: Binary Search in A001567 (https://oeis.org/A001567) below 2 ^ 64
// Only 2.35 Gigabytes as a text file http://www.cecm.sfu.ca/Pseudoprimes/index-2-to-64.html
}
