Java: get a long larger than 1000000000

In this simple code I cannot get a long larger than 1000000000. That value is only 10 characters long, and I want to get larger values, such as 15 characters:
long value = nextLong(rand, 1000000000);

long nextLong(Random rng, long n) {
    long bits, val;
    do {
        bits = (rng.nextLong() << 1) >>> 1; // clear the sign bit to get a non-negative value
        val = bits % n;
    } while (bits - val + (n - 1) < 0L);    // reject values that would bias the modulo
    return val;
}

Your long constant is missing an L suffix:
long value = nextLong(rand,100000000000000L);
"I want to get larger values, such as 15 characters."
Java's long has a range of -9223372036854775808 to 9223372036854775807 (18 full digits, plus a top digit in the range 0..8), which is sufficient to cover the range that you need to cover. If you need 19 decimal digits or more, you would need to use BigInteger.
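Alternatively, if you would rather not maintain the rejection loop yourself, ThreadLocalRandom (available since Java 7) already provides a bounded nextLong; a minimal sketch with a 15-digit bound:

import java.util.concurrent.ThreadLocalRandom;

// uniform over [0, 10^15), so results can have up to 15 digits
long value = ThreadLocalRandom.current().nextLong(1_000_000_000_000_000L);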

You should be able to use BigInteger.
Import using:
import java.math.BigInteger;
declare like this:
BigInteger myBigInt = new BigInteger("123456789123456789");
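If the goal is a random value with more digits than a long can comfortably provide, BigInteger also has a constructor that draws a uniformly distributed number directly from a Random; a minimal sketch (the 50-bit width is just an illustrative choice):

import java.math.BigInteger;
import java.util.Random;

Random rand = new Random();
BigInteger big = new BigInteger(50, rand); // uniform over [0, 2^50), i.e. up to roughly 16 decimal digits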

Increase your limit value 'n'. Since you are limiting the generated random value by taking it modulo 'n', the generated value obviously has to be less than 'n'. Since your limit is a long, you can increase it to allow for 15-digit results without other changes.
However, I am not sure what you are trying to accomplish with the loop in the nextLong function. It will only loop when bits > (Long.MAX_VALUE - n + 1), i.e. it rejects the few values from the incomplete block at the top of the range that would otherwise make the modulo result non-uniform.

I get the feeling that you're limiting yourself by your own modulo operation.
Remember that modulo division is the same as short division - the kind we used back in third grade. That is, instead of dividing out the entire number, we take the whole portion and the remainder.
So, let's take a simple example (a power of 10, since you're using one as well):
99 / 10 = 9 remainder 9
That is to say, if I divide 99 by 10 using short division, it goes in evenly 9 times, with 9 left over. Notice that the left-over is smaller than what I'm dividing by.
This scales up with higher orders of divisors:
999 / 10 = 99 remainder 9
9999 / 10 = 999 remainder 9
99999 / 10 = 9999 remainder 9
...and so forth. Notice that our remainder is always smaller than our divisor. This makes sense: if it were larger than our divisor, the divisor could go in one more time, so it'd be another value we could add to the quotient, and not the remainder.
Now, we come back to your example. You're taking a long value, which can be several orders of magnitude larger or smaller than your passed in value of a billion (which fits fine into an int, and is promoted to a long when you call your method).
The ultimate issue comes down to this:
val = bits % n;
...where bits is some arbitrary long value that could be greater than n.
Remember what we discovered with the short division above? That's right: your resulting val will always be smaller than your n value - that is to say, it will never be larger than or equal to n.
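A quick demonstration of that bound (my own snippet, with an arbitrary example value):

long n = 1000000000L;              // the bound passed in the question
long bits = 123456789012345L;      // some arbitrary long
System.out.println(bits % n);      // 789012345: the result is always in [0, n)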
I'm not entirely sure what it is you're trying to accomplish, so I don't have The Right Thingโ„ข for you to do. But I'd recommend that you re-evaluate the purpose of that modulo operation.

Related

Long Representation vs Double representation of positive and negative zero in java

I was wondering about the differences between positive and negative zero in different numeric types.
I understand IEEE-754 floating-point arithmetic and the bit representation in double precision, so the following didn't come as a surprise:
double posz = 0.0;
double negz = -0.0;
System.out.println(Long.toBinaryString(Double.doubleToLongBits(posz)));
System.out.println(Long.toBinaryString(Double.doubleToLongBits(negz)));
// output
>>> 0
>>> 1000000000000000000000000000000000000000000000000000000000000000
What did surprise me, and showed me that I'm clueless about the bit representation of the long type in Java, is that even if I shift right (unsigned, >>>), the binary representation of both positive and negative zero is the same:
long posz = 0L;
long negz = -0L;
for (int i = 63; i >= 0; i--) {
    System.out.print((posz >>> i) & 1);
}
System.out.println();
for (int i = 63; i >= 0; i--) {
    System.out.print((negz >>> i) & 1);
}
// output
>>> 0000000000000000000000000000000000000000000000000000000000000000
>>> 0000000000000000000000000000000000000000000000000000000000000000
So I am wondering what Java does, bit-representation-wise, when I write the following:
long posz = 0L;
long negz = -0L;
Does the compiler understand that they are both zero and disregard the sign (and so assign 0 to the sign bit), or is there other magic here?
or is there other magic here?
Yes. 2's complement.
2's complement is a bit magical. It accomplishes 2 major objectives. Before getting into that, let's first stew on the notion of negative zero for a moment.
Negative zero is kinda weird. Why does it exist at all?
Negative zero isn't actually a thing. Ask any mathematician "Hey, so, what's up with negative zero?" and they'll just look at you in befuddlement. It's not a thing. Mathematically, 0 and -0 are utterly identical. Not just 'nearly identical', but 100%, fully, in all possible ways, identical. We don't generally want our numbers to be capable of representing both 5.0 as well as 5.00 - as those two are entirely, 100%, identical. If you don't think that a value system ought to waste bits trying to differentiate between 5.0 and 5.00, then it's equally bizarro to want the ability to represent -0.0 and +0.0 as distinct entities.
So, wanting -0 in the first place is kinda weird. The integral primitives (long, int, short, byte, and I guess char, which is technically numeric too) cannot represent it at all. Instead, long z = -0 boils down to:
Take the constant "0".
Apply the 'negate' operation to this number. (- is a unary operator: just like 2+5 makes the system calculate the binary operation of "addition" on elements 2 and 5, -x makes the system calculate the unary operation of "negation" on element x.) Applying the negation operation to 0 produces 0. It's no different from writing, say, int x = 5 + 0; - that +0 part doesn't do anything. The - in front of -0 doesn't do anything either, in contrast to -0.0, where it does do something (gets you negative zero, the double value, instead of positive zero).
Store this result in z (so, just 0 then).
There is no way to tell whether that minus was there. Both result in ALL ZERO bits, and hence there is no way for the computer to tell whether you initialized that variable with the expression -0 or with +0. Again, in contrast to double, where, as you noticed, one bit differs.
So why does double have it then?
Let's stew a bit on the notion of doubles and IEEE-754 math.
A double takes 64 bits. From basic mathematical principles, then, a double is as incapable of representing more than 2^64 different values as you are of breaking the speed of light or making 1+1=3.
And yet, a double aims to represent all numbers. There are way more numbers between 0 and 1 than 2^64 options (in fact, an infinite amount of numbers exist between 0 and 1), and that's just 0 to 1.
So, how doubles actually work is different. A few less than 2^64 numbers are chosen from the entire number line. Let's call these the blessed numbers.
The blessed numbers are not equally distributed. The closer you are to zero, the more blessed numbers exist; in other words, the distance between 2 adjacent blessed numbers increases as you move away from zero. For example, if you start at 1e100 (a 1 with a hundred zeroes) and want to find the next blessed number, it's quite a ways off - the gap is in fact larger than 1.0! 1e100 + 1 is simply 1e100 again, because the way double math works is that after every single mathematical operation, the end result is rounded to the nearest blessed number.
Let's try it!
double d = 1e100;
System.out.println(d);
System.out.println(d + 1);
// prints: 1.0E100
// 1.0E100
But that means... double values don't actually represent a single number! What any given double represents is in fact this concept:
An unknown number whose value lies somewhere in [D - δ, D + δ], where D is the blessed number closest to the unknown number this value represents, and δ is half of the distance between D and the nearest blessed number on either side.
Given that δ is usually incredibly small, this is 'good enough'. But this weirdness does explain why you really, really do not want any business at all with double if accuracy is important (such as with currencies - don't store those in doubles, ever!).
Given that, what does -0.0 represent? Not actually just 0. It represents, specifically: an unknown number whose value lies in [-δ, 0], where 0 is real zero (and thus has no sign), and δ is Double.MIN_VALUE: the smallest non-zero positive number representable in a double.
That's why -0.0 and +0.0 both exist: They are in fact different concepts. Rarely relevant, but sometimes it is. In contrast to e.g. long where 5 just means 5 and not "between 4.5 and 5.5", because longs fundamentally don't recognize that fractional parts exist in the first place. Given that 5 just means 5, then 0 just means 0, and there is no such thing as negative zero in the first place.
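One well-known place where the distinction between the two zeroes is observable is division; a small demonstration (my own snippet of standard Java behavior):

System.out.println(1.0 / 0.0);   // Infinity
System.out.println(1.0 / -0.0);  // -Infinity
System.out.println(0.0 == -0.0); // true: they still compare equal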
Now we get to 2's complement
2's complement is a cool system. It has two neat properties:
It only has the one zero.
It does not matter whether you treat the bit sequence as signed-by-way-of-2's-complement or as unsigned, for the purposes of the operations addition, subtraction, increment, decrement, and zero-check: the modifications you do to the bits to implement those operations are identical.
It DOES matter for greater than, less than, and divide.
2's complement works like this: To negate a number, take all bits and flip them (i.e. do a NOT operation on the bits). Then, add 1.
Let's try it!
int x = 5;
int y = -x;
for (int i = 31; i >= 0; i--) {
    System.out.print((x >>> i) & 1);
}
System.out.println();
for (int i = 31; i >= 0; i--) {
    System.out.print((y >>> i) & 1);
}
System.out.println();
// prints 00000000000000000000000000000101
//        11111111111111111111111111111011
As we can see, the 'flip all bits and add 1' algorithm was applied.
2s complement is, of course, reversible: If you do 'flip all bits and add 1' twice in a row you get the same number out.
Now let's try -0. 0 is 32 0 bits, then flip them all, then add 1:
00000000000000000000000000000000
11111111111111111111111111111111 // flip all
100000000000000000000000000000000 // add 1
00000000000000000000000000000000 // that 1 fell off
and because ints can only store 32 bits, that final '1' falls off of the end. And we're left with zero again.
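You can confirm that in Java directly (my own one-liner):

System.out.println(~0 + 1); // flip all bits of 0, then add 1: the carry falls off and it prints 0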
Now let's go with bytes (a bit smaller) and try to add, say, 200 and 50 together.
11001000 // 200 in binary
00110010 // 50 in binary
-------- +
11111010 // 250 in binary.
Now let's instead go: oh wait, whoops, that was an error - actually these numbers are in 2's complement. That wasn't 200, nono: 11001000 is a bit sequence that actually means (applying the 'flip all bits, add 1' scheme, giving 00111000) -56. So the operation was meant to represent '-56 + 50', which is -6. -6 in binary is (write out 6, flip bits, add 1):
00000110
11111001
11111010
hey now, look at that, nothing changed! It's the same result! So, when the computer does x + y, where x and y are numbers, the computer does not care. Whether x is "an unsigned number" or "a signed with 2s complement number", the operation is identical.
That's why 2s complement is applied. It makes math MUCH faster. The CPU doesn't have to futz about with branching out to deal with sign bits.
In this sense it is more correct to say that in Java, int, long, char, byte and short are neither signed nor unsigned; they just are, at least for the purposes of +, -, ++, and --. No, the idea that int is signed is fundamentally a property of, e.g., System.out.println(int): that method chooses to render the bit sequence 11111111111111111111111111111111 as "-1" instead of as 4294967295.
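Java 8 even ships both renderings of the same bit pattern, which makes the point nicely (a small demonstration using standard library methods):

int allOnes = -1;                                      // the bit pattern 11111111111111111111111111111111
System.out.println(allOnes);                           // -1 (signed rendering)
System.out.println(Integer.toUnsignedString(allOnes)); // 4294967295 (unsigned rendering)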
long has no such thing as negative zero. Only float and double have a different representation of positive and negative zero.

Can someone explain how reversing an integer using % 10 works?

To reverse an integer and put it into a list, one would do the following (where x is some integer):
List<Integer> list = new ArrayList<>();
int lastDigit = x;
while (lastDigit != 0)
{
    list.add(lastDigit % 10);
    lastDigit /= 10;
}
So if x was 502, then 2, 0, and 5 would get added to the list.
This is obviously really useful, but until yesterday I thought the only way to do something like this was by converting the int to a string first.
I'm not sure if this is just common knowledge but I had not seen this method before today. I would like to understand how it works instead of merely memorizing it.
Could someone explain why the number modulus 10 gives the last digit, and why dividing it by 10 gives the next digit on the next iteration? Why would it eventually equal 0?
The modulus operator gives you the remainder from doing a division calculation.
502 % 10 is 2 because 502/10 = 50 plus a remainder of 2.
Therefore the remainder in this calculation is 2, meaning 2 will be added to the list.
The division by ten in the next line is performed using integer arithmetic, so 502/10 gives a result of 50.
Any non-negative number less than 10 will give a result of zero, ending the loop.
Think of % 10 as getting the least significant (right most) digit in decimal system (hence 10).
And then think of / 10 as shifting all digits one place right (also decimal). You obviously have to do it until the number is 0. All remaining digits can be understood as leading zeros in this case.
In binary system you can also use the bitwise operations & 1 and >> 1 instead of modulo (% 2) and integer (/ 2) divisions.
The list append operation (here add) is the one that reverses the order. The operations above are just for extraction of the single digits.
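For completeness, here is the same extraction in base 2 with the bitwise operations mentioned above (my own sketch, assuming x is non-negative):

int x = 502;
List<Integer> bits = new ArrayList<>();
while (x != 0)
{
    bits.add(x & 1); // least significant bit, the base-2 analogue of % 10
    x >>= 1;         // shift one digit right, the base-2 analogue of / 10
}
// bits now holds [0, 1, 1, 0, 1, 1, 1, 1, 1]: the binary digits of 502, reversed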

Calculating number of bits and number of words BigInteger

While converting a String into a BigInteger, Java internally calculates the number of bits and then the number of words (each word being, I think, a group of 9 digits) in the BigInteger, as can be seen here from Line 325 to Line 327. numWords is then used to create an array that can accommodate that BigInteger.
I don't understand the logic used for calculating numBits in Line 325 and then the logic for numWords in Line 326.
Logically, I think that for the string "123456789" numWords should be 1 and for "12345678912" it should be 2, but that's not always the case. For example, for "12345678912345678912" numWords should be 3, but it comes out to be 2.
Can anyone please explain the logic used in line 325 and 326?
To represent a decimal number of numDigits digits as a binary number, it requires about
numDigits * Math.log(10) / Math.log(2)
bits.
int numBits = (int)(((numDigits * bitsPerDigit[radix]) >>> 10) + 1);
In the calculation above, bitsPerDigit[10] is 3402, which is Math.log(10) / Math.log(2) scaled by 2^10 and rounded up:
Math.log(10) / Math.log(2) * Math.pow(2, 10) = 3401.6543691646593
The >>> 10 then undoes that scaling, and the + 1 rounds the estimate up.
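A small worked example of that line (my own sketch; the constant 3402 is copied from the linked JDK source):

long bitsPerDigit10 = 3402;      // ceil(log2(10) * 2^10), scaled to stay in integer arithmetic
long numDigits = 20;             // e.g. the string "12345678912345678912"
int numBits = (int) (((numDigits * bitsPerDigit10) >>> 10) + 1);
System.out.println(numBits);     // 67: a slight overestimate of the 64 bits actually needed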
In Java, BigIntegers are not stored as strings or bytes with a digit each. They are stored as an array of 32-bit integers, which together form the so-called magnitude of the BigInteger. There can be no leading zero integers(*), so the BigInteger is stored as compactly as possible.
The "words" mentioned are these 32-bit integers. They are not groups of 9 digits, they are used in full, so each bit counts.
So you just have to know how many 32-bit integers are stored: the number of bits is then the length of the internal array times 32. But the top integer can still have leading zeroes, so you must get the number of leading zero bits of that top integer and subtract them from the obtained product, in pseudo-code:
numBits = internalArray.length * 32 - numberOfLeadingZeroBits(internalArray[0]);
Note that the internal array is stored with the top integer at the lowest address (I have no idea why that is), so the top integer is at index 0 of the array.
(*) In reality, the above is a little more complicated, since the top item may be stored at an offset from the start of the array (probably to make certain calculations easier), but to understand the mechanism, you can pretend there are no extra integers.
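In real Java, the pseudo-code above maps onto the standard library like this (a sketch, under the simplifying assumption that the magnitude array has no offset and no leading zero words):

// number of bits in a magnitude stored as int[] words, most significant word first
static int numBits(int[] internalArray) {
    if (internalArray.length == 0) return 0; // an empty magnitude represents zero
    return internalArray.length * 32 - Integer.numberOfLeadingZeros(internalArray[0]);
}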
"Words" doesn't refer to words as you know them - it's referring to words as memory blocks:
https://en.wikipedia.org/wiki/Word_(computer_architecture)

Check division by 3 with binary operations?

I've read this interesting answer about "Checking if a number is divisible by 3"
Although the answer is in Java, it seems to work with other languages also.
Obviously we can do:
boolean canBeDevidedBy3 = (i % 3) == 0;
But the interesting part was this other calculation:
boolean canBeDevidedBy3 = ((int) (i * 0x55555556L >> 30) & 3) == 0;
For simplicity:
0x55555556L = "1010101010101010101010101010110"
N.B.
There's also another method to check it :
One can determine if an integer is divisible by 3 by counting the 1 bits at odd bit positions, multiplying this number by 2, adding the number of 1-bits at even bit positions to the result, and checking if the result is divisible by 3.
For example:
93 (base 10) is divisible by 3:
01011101 (base 2)
It has 1 bit in an odd place and 4 bits in even places (place is the zero-based location of the base-2 digit).
So 2*1 + 4 = 6, which is divisible by 3.
At first I thought those 2 methods are related but I didn't find how.
Question
How does
boolean canBeDevidedBy3 = ((int) (i * 0x55555556L >> 30) & 3) == 0;
actually determine whether i % 3 == 0?
Whenever you add 3 to a number, what you do is to add binary 11. Whatever the original value of the number, this will maintain the invariant that twice the number of 1 bits at odd positions, plus the number of 1 bits at even positions, will also be divisible by 3.
You can see that this way. Let's call the value of the above expression c, where a 1 bit at an even position counts 1 towards c and a 1 bit at an odd position counts 2. Now add 1 at some position. If that bit was unset, c simply grows by the count of that position. If the bit was previously set, you flip it off (c shrinks by the count of that position) and carry a 1 into the next position, whose count is the other value. Working modulo 3, the counts 1 and 2 are negatives of each other, so 'lose this position's count, gain the next position's count' comes out to the same change, modulo 3, as setting the bit directly would have.
The carry may of course ripple through several set bits, but each step of the ripple obeys the same rule. So, modulo 3, adding 1 at an even position always changes c by 1 and adding 1 at an odd position always changes c by 2; adding 3, i.e. binary 11, changes c by 1 + 2 = 3, which is 0 modulo 3, and the invariant is maintained.
A more formal version of this would be structured as a proof by induction.
The two methods do not appear to be related. The bit-wise method seems to be related to certain methods for the efficient computation of modulo b-1 when using digit base b, known in decimal arithmetic as "casting out nines".
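For what it's worth, the bit-counting method from the question can be written compactly with Integer.bitCount; a minimal sketch (my own, assuming non-negative i), using the fact that 2^even = 1 (mod 3) and 2^odd = 2 (mod 3):

static boolean divisibleBy3(int i) {
    int even = Integer.bitCount(i & 0x55555555); // 1-bits at even positions (0, 2, 4, ...)
    int odd  = Integer.bitCount(i & 0xAAAAAAAA); // 1-bits at odd positions (1, 3, 5, ...)
    return (even + 2 * odd) % 3 == 0;
}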
The multiplication-based method is directly based on the definition of division when accomplished by multiplication with the reciprocal. Letting / denote mathematical division, we have
int_quot = (int)(i / 3)
frac_quot = i / 3 - int_quot = i / 3 - (int)(i / 3)
i % 3 = 3 * frac_quot = 3 * (i / 3 - (int)(i / 3))
The fractional portion of the mathematical quotient translates directly into the remainder of integer division: If the fraction is 0, the remainder is 0, if the fraction is 1/3 the remainder is 1, if the fraction is 2/3 the remainder is 2. This means we only need to examine the fractional portion of the quotient.
Instead of dividing by 3, we can multiply by 1/3. If we perform the computation in a 32.32 fixed-point format, 1/3 corresponds to 2^32 * 1/3, which is a number between 0x55555555 and 0x55555556. For reasons that will become apparent shortly, we use the overestimation here, that is, the rounded-up result 0x55555556.
When we multiply 0x55555556 by i, the most significant 32 bits of the full 64-bit product will contain the integral portion of the quotient (int)(i * 1/3) = (int)(i / 3). We are not interested in this integral portion, so we neither compute nor store it. The lower 32 bits of the product are one of the fractions 0/3, 1/3, 2/3, however computed with a slight error, since our value 0x55555556 is slightly larger than 1/3 (values below in hexadecimal 32.32 fixed point):
i = 1: i * 0x55555556 = 0.55555556
i = 2: i * 0x55555556 = 0.AAAAAAAC
i = 3: i * 0x55555556 = 1.00000002
i = 4: i * 0x55555556 = 1.55555558
i = 5: i * 0x55555556 = 1.AAAAAAAE
If we examine the most significant bits of the three possible fraction values in binary, we find that 0x5 = 0101, 0xA = 1010, 0x0 = 0000. So the two most significant bits of the fractional portion of the quotient correspond exactly to the desired modulo values. Since we are dealing with 32-bit operands, we can extract these two bits with a right shift by 30 bits followed by a mask of 0x3 to isolate two bits. I think the masking is needed in Java as 32-bit integers are always signed. For uint32_t operands in C/C++ the shift alone would suffice.
We now see why choosing 0x55555555 as representation of 1/3 wouldn't work. The fractional portion of the quotient would turn into 0xFFFFFFF*, and since 0xF = 1111 in binary, the modulo computation would deliver an incorrect result of 3.
Note that as i increases in magnitude, the accumulated error from the imprecise representation of 1/3 affects more and more bits of the fractional portion. In fact, exhaustive testing shows that the method only works for i < 0x60000000: beyond that limit the error overwhelms the most significant fraction bits which represent our result.
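A quick spot check (my own harness, not from the answer) that compares the trick against the plain modulo across the valid range:

public class Div3Check {
    public static void main(String[] args) {
        for (int i = 0; i < 0x60000000; i += 9999) { // sample i < 0x60000000, the limit noted above
            boolean trick = ((int) (i * 0x55555556L >> 30) & 3) == 0;
            boolean plain = (i % 3) == 0;
            if (trick != plain) {
                System.out.println("mismatch at i = " + i);
            }
        }
        System.out.println("done");
    }
}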

Why does this random value have a 25/75 distribution instead of 50/50?

Edit: so basically what I'm trying to write is a 1-bit hash for double.
I want to map a double to true or false with a 50/50 chance. For that I wrote code that picks some random numbers (just as an example, I want to use this on data with regularities and still get a 50/50 result), checks their last bit and increments y if it is 1, or n if it is 0.
However, this code constantly results in 25% y and 75% n. Why is it not 50/50? And why such a weird, but straightforward, distribution (a 1:3 split)?
public class DoubleToBoolean {
    @Test
    public void test() {
        int y = 0;
        int n = 0;
        Random r = new Random();
        for (int i = 0; i < 1000000; i++) {
            double randomValue = r.nextDouble();
            long lastBit = Double.doubleToLongBits(randomValue) & 1;
            if (lastBit == 1) {
                y++;
            } else {
                n++;
            }
        }
        System.out.println(y + " " + n);
    }
}
Example output:
250167 749833
Because nextDouble works like this: (source)
public double nextDouble()
{
    return (((long) next(26) << 27) + next(27)) / (double) (1L << 53);
}
next(x) makes x random bits.
Now why does this matter? Because about half the numbers generated by the first part (before the division) are less than 1L << 52, and therefore their significand doesn't entirely fill the 53 bits that it could fill, meaning the least significant bit of the significand is always zero for those.
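You can check that claim empirically by rebuilding the numerator yourself; in this sketch (my own), next(x) is emulated with nextInt(1 << x), which produces uniform random bits of the same width:

Random r = new Random();
for (int i = 0; i < 5; i++) {
    long numerator = ((long) r.nextInt(1 << 26) << 27) + r.nextInt(1 << 27);
    double d = numerator / (double) (1L << 53);
    long lastBit = Double.doubleToLongBits(d) & 1;
    // when the numerator is below 1L << 52, normalization shifts the
    // significand left and the stored low bit always comes out 0
    System.out.println((numerator < (1L << 52)) + " -> last bit " + lastBit);
}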
Because of the amount of attention this is receiving, here's some extra explanation of what a double in Java (and many other languages) really looks like and why it mattered in this question.
Basically, a double consists of a sign bit, an 11-bit exponent, and a 52-bit fraction (significand) field; the original answer shows the usual IEEE 754 layout diagram here. (source)
A very important detail not visible in that layout is that numbers are "normalized" (1) such that the 53-bit fraction starts with a 1 (by choosing the exponent such that it is so); that leading 1 is then omitted. That is why the layout shows 52 bits for the fraction (significand), but there are effectively 53 bits in it.
The normalization means that if in the code for nextDouble the 53rd bit is set, that bit is the implicit leading 1 and it goes away, and the other 52 bits are copied literally to the significand of the resulting double. If that bit is not set however, the remaining bits must be shifted left until it becomes set.
On average, half the generated numbers fall into the case where the significand was not shifted left at all (and about half those have a 0 as their least significant bit), and the other half is shifted by at least 1 (or is just completely zero) so their least significant bit is always 0.
(1): not always; clearly it cannot be done for zero, which has no highest 1. These numbers are called denormal or subnormal numbers, see wikipedia: denormal number.
From the docs:
The method nextDouble is implemented by class Random as if by:
public double nextDouble() {
    return (((long) next(26) << 27) + next(27))
            / (double) (1L << 53);
}
But it also states the following (emphasis mine):
[In early versions of Java, the result was incorrectly calculated as:
return (((long) next(27) << 27) + next(27))
        / (double) (1L << 54);
This might seem to be equivalent, if not better, but in fact it introduced a large nonuniformity because of the bias in the rounding of floating-point numbers: it was three times as likely that the low-order bit of the significand would be 0 than that it would be 1! This nonuniformity probably doesn't matter much in practice, but we strive for perfection.]
This note has been there since Java 5 at least (docs for Java <= 1.4 are behind a loginwall, too lazy to check). This is interesting, because the problem apparently still exists even in Java 8. Perhaps the "fixed" version was never tested?
This result doesn't surprise me given how floating-point numbers are represented. Let's suppose we had a very short floating-point type with only 4 bits of precision. If we were to generate a random number between 0 and 1, distributed uniformly, there would be 16 possible values:
0.0000
0.0001
0.0010
0.0011
0.0100
...
0.1110
0.1111
If that's how they looked in the machine, you could test the low-order bit to get a 50/50 distribution. However, IEEE floats are represented as a power of 2 times a mantissa; one field in the float is the power of 2 (plus a fixed offset). The power of 2 is selected so that the "mantissa" part is always a number >= 1.0 and < 2.0. This means that, in effect, the numbers other than 0.0000 would be represented like this:
0.0001 = 2^(-4) x 1.000
0.0010 = 2^(-3) x 1.000
0.0011 = 2^(-3) x 1.100
0.0100 = 2^(-2) x 1.000
...
0.0111 = 2^(-2) x 1.110
0.1000 = 2^(-1) x 1.000
0.1001 = 2^(-1) x 1.001
...
0.1110 = 2^(-1) x 1.110
0.1111 = 2^(-1) x 1.111
(The 1 before the binary point is an implied value; for 32- and 64-bit floats, no bit is actually allocated to hold this 1.)
But looking at the above should demonstrate why, if you convert the representation to bits and look at the low bit, you will get zero 75% of the time. This is due to all values less than 0.5 (binary 0.1000), which is half the possible values, having their mantissas shifted over, causing 0 to appear in the low bit. The situation is essentially the same when the mantissa has 52 bits (not including the implied 1) as a double does.
(Actually, as @sneftel suggested in a comment, we could include more than 16 possible values in the distribution, by generating:
0.0001000 with probability 1/128
0.0001001 with probability 1/128
...
0.0001111 with probability 1/128
0.001000 with probability 1/64
0.001001 with probability 1/64
...
0.01111 with probability 1/32
0.1000 with probability 1/16
0.1001 with probability 1/16
...
0.1110 with probability 1/16
0.1111 with probability 1/16
But I'm not sure it's the kind of distribution most programmers would expect, so it probably isn't worthwhile. Plus it doesn't gain you much when the values are used to generate integers, as random floating-point values often are.)
