Why does double in Java have the specific range of values ±5.0*10^(-324) to ±1.7*10^(308)? I mean, why isn't it something like ±5.0*10^(-324) to ±5.0*10^(308), or ±1.7*10^(-324) to ±1.7*10^(308)?
The answer to your question is subnormal numbers; see the following link:
https://en.wikipedia.org/wiki/Denormal_number
Double-precision floating-point numbers in Java are based on the format defined in IEEE 754.
See this link for the explanation.
https://en.wikipedia.org/wiki/Double-precision_floating-point_format
Here is a simplified set of rules.
A double is represented in 64 bits, divided as follows:
Sign: 1 bit (the sign of the number)
Exponent: 11 bits (stored with a bias of 1023)
Significand precision (fraction): 52 bits
The number ranges we get from this setup are:
-1022 <= Exponent <= 1023 (2046 values in total; stored exponents 0 and 2047 are excluded because they have special meanings):
0x000 (0) is used to represent signed zero (if F = 0) and subnormals (if F ≠ 0); and
0x7ff (2047) is used to represent ∞ (if F = 0) and NaNs (if F ≠ 0),
https://en.wikipedia.org/wiki/Exponent_bias
and
0 <= Fraction <= 2^52 - 1 (the 52 stored fraction bits; for normal numbers the effective significand is 1.Fraction)
So the minimum and maximum numbers that can be represented are
Min positive normal double = 1.0 * 2^(-1022) ≈ 2.225 * 10^(-308)
Note: 1022 * Math.log(2) / Math.log(10) = 307.652,
and Math.pow(10, 1 - 0.652) = 2.228 (0.652 is an approximation)
Max positive double = (2 - 2^(-52)) * 2^1023 ≈ 1.797 * 10^308
So for normal numbers the positive range is [2.225 * 10^(-308), 1.797 * 10^308] (mirrored for negative numbers)
This range extends further toward zero due to subnormal numbers.
A subnormal number is a number smaller than the minimum normal number defined by the specification.
Consider the decimal number 0.00123; normalized, it is written 1.23 * 10^(-3). Normal floating-point numbers, by specification, have no leading zeros in the significand: the position of the leading digit is folded into the exponent. Subnormals drop this rule: once the exponent is already at its minimum, leading zeros in the fraction effectively push the value further below 2^(-1022).
There are 52 bits for the significand (fraction), so there can be at most 51 leading zero bits followed by a final 1 bit, which effectively produces the following number.
Min positive subnormal = 2^(-52) * 2^(-1022) = 2^(-1074) ≈ 4.9 * 10^(-324)
Note: 1074 * Math.log(2) / Math.log(10) = 323.306,
and Math.pow(10, 1 - 0.306) = 4.943 (0.306 is an approximation)
So there you have it: the representable magnitudes now run from the minimum subnormal up to the maximum normal number, i.e.
±4.9 * 10^(-324) to ±1.79769 * 10^308
which is exactly the range in the question.
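These bounds can be checked directly against the constants the JDK exposes; a quick sketch:

```java
public class DoubleRange {
    public static void main(String[] args) {
        // Smallest positive subnormal: bit pattern 0x0000000000000001, i.e. 2^-1074
        System.out.println(Double.MIN_VALUE);  // 4.9E-324
        // Smallest positive normal: 2^-1022
        System.out.println(Double.MIN_NORMAL); // 2.2250738585072014E-308
        // Largest finite value: (2 - 2^-52) * 2^1023
        System.out.println(Double.MAX_VALUE);  // 1.7976931348623157E308
    }
}
```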
I'm studying the floating-point type, and one of the examples is a declaration of a float variable expressed in hexadecimal:
float f_in_hex = 0x1.59a8f6p8f;
This is the computation to find the float value:
(1 * 16^0 + 5 * 16^-1 + 9 * 16^-2 + 10 * 16^-3 + 8 * 16^-4 + 15 * 16^-5 + 6 * 16^-6) * 2^8
So I know that the prefix 0x means base 16, but I still don't understand why the exponents start from 0 and go to negative values.
The exponents are negative because those digits come after the (hexa)decimal point.
16^(-1) is the same as 1/16 = 0.0625.
If the exponents were positive, those digits would contribute whole-number values instead of fractional ones.
Hope that makes sense.
Watch out for the prefix: it must be the digit zero followed by x (0x), not the letter O, or the literal will not compile. Also, the p is not a hexadecimal digit; in hexadecimal floating-point literals it is the mandatory binary-exponent marker, so p8 means × 2^8.
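For reference, here is the literal with the correct 0x prefix, checked against the digit-by-digit sum from the question; a quick sketch:

```java
public class HexFloat {
    public static void main(String[] args) {
        // Hexadecimal floating-point literal: 0x prefix, hex digits,
        // mandatory binary exponent introduced by 'p' (here 2^8), 'f' suffix for float.
        float f = 0x1.59a8f6p8f;

        // The same value built digit by digit, as in the question:
        // each hex digit contributes digit * 16^-position, all scaled by 2^8.
        double manual = (1
                + 5  / 16.0
                + 9  / 256.0
                + 10 / 4096.0
                + 8  / 65536.0
                + 15 / 1048576.0     // 16^5
                + 6  / 16777216.0)   // 16^6
                * 256;               // 2^8

        System.out.println(f); // 345.66
        System.out.println(f == (float) manual);
    }
}
```

The six hex fraction digits are exactly 24 significand bits, so the value fits a float with no rounding and the two computations agree exactly.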
How come? I thought the "+1" would make 1 the lowest number it can generate... This is the question:
"(int) Math.random()*(65535 + 1) returns a random number between:
Between 0 and 65535. <- answer
This is a question from a sololearn challenge.
The documentation of method Math.random() says:
Returns a double value with a positive sign, greater than or equal to 0.0 and less than 1.0.
Mathematically expressed, the generated interval is [0, 1): the generated number will never reach 1.0, only something slightly below it (e.g. 0.999…). Multiplying by 65535 alone and truncating could therefore never produce 65535; multiplying by 65535 + 1 = 65536 makes the truncated range 0 through 65535. That's why the + 1 is there.
I recommend using the class Random and its method nextInt(int bound), which does:
Returns a pseudorandom, uniformly distributed int value between 0 (inclusive) and the specified value (exclusive)
Therefore:
Random random = new Random();
int integer = random.nextInt(65536); // 65535 + 1 because the number is exclusive
The way you have the code right now:
(int) Math.random()*(65535 + 1)
You will always get 0.
The Math.random() method generates a number in the range [0, 1).
Returns a double value with a positive sign, greater than or equal to 0.0 and less than 1.0.
When you multiply that number by n, it has a range of [0, n). Casting it to int truncates any decimal portion of the number, making that number 0, and anything multiplied with 0 is 0. The cast occurs first because it's a higher precedence than multiplication.
Let's add parentheses so the cast occurs after the multiplication.
(int) (Math.random()*(65535 + 1))
Multiplying first gives a value in the range [0, n); casting to int after the multiplication truncates any decimal portion, making the range of integers 0 through n - 1.
If you add 1 after multiplying and casting, then the lowest number it could generate would be 1. The range before adding would be 0 through 65534, after adding it would be 1 through 65535.
(int) (Math.random()*65535) + 1
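The precedence difference described above can be seen directly; a small demo:

```java
public class RandomRange {
    public static void main(String[] args) {
        // Cast binds tighter than '*': (int) Math.random() is always 0,
        // so the whole expression is always 0.
        int wrong = (int) Math.random() * (65535 + 1);

        // Parenthesized: multiply first, then truncate -> 0..65535.
        int right = (int) (Math.random() * (65535 + 1));

        // Add 1 after the cast to shift the range up -> 1..65535.
        int shifted = (int) (Math.random() * 65535) + 1;

        System.out.println(wrong); // always 0
        System.out.println(right);
        System.out.println(shifted);
    }
}
```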
How come? I thought that "+1" is the lowest number it can generate...
That is because the +1 was placed within the brackets (and assuming the cast is parenthesized so it happens last). See below:
(int) (Math.random()*(65535 + 1)) // is equivalent to
(int) (Math.random()*65536)       // Math.random() is in [0.0, 1.0), so this is..
(int) (0.0 * 65536) up to (int) (0.999… * 65536) // which gives you..
(int) 0.0 up to (int) 65535.99…   // converted to int gives you
0 to 65535
If you want the minimum random number to be at least 1, add it after the random operation is done:
(int) (Math.random()*65535) + 1
I'm using BigDecimal for the numbers in my application, for example with JPA. I did a bit of research on the terms 'precision' and 'scale', but I don't understand what exactly they are.
Can anyone explain the meaning of 'precision' and 'scale' for a BigDecimal value?
@Column(precision = 11, scale = 2)
Thanks!
A BigDecimal is defined by two values: an arbitrary-precision integer (the unscaled value) and a 32-bit integer scale. The value of the BigDecimal is defined to be unscaledValue * 10^(-scale).
Precision:
The precision is the number of digits in the unscaled value.
For instance, for the number 123.45, the precision returned is 5.
So, precision indicates the length of the arbitrary precision integer. Here are a few examples of numbers with the same scale, but different precision:
12345 / 100000 = 0.12345 // scale = 5, precision = 5
12340 / 100000 = 0.12340 // scale = 5, precision = 5
1 / 100000 = 0.00001 // scale = 5, precision = 1
In the special case that the number is equal to zero (i.e. 0.000), the precision is always 1.
Scale:
If zero or positive, the scale is the number of digits to the right of the decimal point. If negative, the unscaled value of the number is multiplied by ten to the power of the negation of the scale. For example, a scale of -3 means the unscaled value is multiplied by 1000.
This means that the unscaled integer value of the BigDecimal is multiplied by 10^(-scale).
Here are a few examples of the same precision, with different scales:
12345 with scale 5 = 0.12345
12345 with scale 4 = 1.2345
…
12345 with scale 0 = 12345
12345 with scale -1 = 123450 †
BigDecimal.toString:
The toString method of a BigDecimal behaves differently depending on the scale and precision. (Thanks to @RudyVelthuis for pointing this out.)
If scale == 0, the integer is just printed out, as-is.
If scale < 0, E-Notation is always used (e.g. 5 scale -1 produces "5E+1")
If scale >= 0 and precision - scale - 1 >= -6, a plain decimal number is produced (e.g. 10000000 with scale 1 produces "1000000.0").
Otherwise, E-notation is used; e.g. 10 with scale 8 produces "1.0E-7", since precision - scale - 1 = 2 - 8 - 1 = -7, which is less than -6.
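These rules can be verified with BigDecimal.valueOf(unscaledValue, scale); a quick check:

```java
import java.math.BigDecimal;

public class ToStringDemo {
    public static void main(String[] args) {
        // scale == 0: the integer is printed as-is
        System.out.println(BigDecimal.valueOf(5, 0));        // 5
        // scale < 0: E-notation is always used
        System.out.println(BigDecimal.valueOf(5, -1));       // 5E+1
        // scale >= 0 and precision - scale - 1 >= -6: plain decimal
        System.out.println(BigDecimal.valueOf(10000000, 1)); // 1000000.0
        // precision - scale - 1 = 2 - 8 - 1 = -7 < -6: E-notation
        System.out.println(BigDecimal.valueOf(10, 8));       // 1.0E-7
    }
}
```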
More examples:
19/100 = 0.19 // integer=19, scale=2, precision=2
1/10000 = 0.0001 // integer=1, scale = 4, precision = 1
Precision: Total number of significant digits
Scale: Number of digits to the right of the decimal point
See BigDecimal class documentation for details.
Quoting Javadoc:
The precision is the number of digits in the unscaled value.
and
If zero or positive, the scale is the number of digits to the right of the decimal point. If negative, the unscaled value of the number is multiplied by ten to the power of the negation of the scale. For example, a scale of -3 means the unscaled value is multiplied by 1000.
Precision is the total number of significant digits in a number.
Scale is the number of digits to the right of the decimal point.
Examples:
123.456 Precision=6 Scale=3
10 Precision=2 Scale=0
-96.9923 Precision=6 Scale=4
0.0 Precision=1 Scale=1
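These examples can be confirmed directly with the precision() and scale() methods; a quick check:

```java
import java.math.BigDecimal;

public class PrecisionScale {
    public static void main(String[] args) {
        // Precision: total significant digits; scale: digits after the point.
        System.out.println(new BigDecimal("123.456").precision());  // 6
        System.out.println(new BigDecimal("123.456").scale());      // 3
        System.out.println(new BigDecimal("10").precision());       // 2
        System.out.println(new BigDecimal("10").scale());           // 0
        System.out.println(new BigDecimal("-96.9923").precision()); // 6
        System.out.println(new BigDecimal("-96.9923").scale());     // 4
        // Special case: the precision of zero is always 1.
        System.out.println(new BigDecimal("0.0").precision());      // 1
        System.out.println(new BigDecimal("0.0").scale());          // 1
    }
}
```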
Negative Scale
For a negative scale value, we apply the following formula:
result = (given number) * 10 ^ (-(scale value))
Example
Given number = 1234.56
scale = -5
-> (1234.56) * 10^(-(-5))
-> (1234.56) * 10^(+5)
-> 123456000
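The same computation can be done with BigDecimal itself via scaleByPowerOfTen, which multiplies the value by 10^n and lowers the scale by n; a small sketch:

```java
import java.math.BigDecimal;

public class NegativeScale {
    public static void main(String[] args) {
        // 1234.56 has unscaled value 123456 and scale 2;
        // multiplying by 10^5 keeps the unscaled value and makes the scale 2 - 5 = -3.
        BigDecimal d = new BigDecimal("1234.56").scaleByPowerOfTen(5);
        System.out.println(d.toPlainString()); // 123456000
        System.out.println(d.scale());         // -3
    }
}
```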
Reference: https://www.logicbig.com/quick-info/programming/precision-and-scale.html
From your example annotation, the maximum is 2 digits after the decimal point and 9 before it (11 in total):
123456789.01
I would like to check whether the result is "measurable", that is, whether it has a finite number of decimal places. What do I mean?
double x = 5.0 / 9.0; // x = 0.(5), i.e. 0.555… repeating
x is not measurable.
I want to round x to the second digit ( x = 0.56 ), but in such case:
double x = 1.0 / 8.0; // x = 0.125
I don't want to round anything.
So here is my question: how do I decide whether the result can be measured or not?
You cannot. That is the reason why 1.0 / 3 / 100 * 3 * 100 gives you 0.9999…9. You only have so many bits to represent the numbers, so you cannot distinguish between the periodic result of 1.0 / 3 and a number whose value actually is the terminating 0.3333…3.
The only fractions that are exactly representable in binary are those whose denominator (in lowest terms) is a power of two. If your input is two integers for the numerator and denominator, find the prime factorization of both and remove the common factors; then check that the only remaining factors in the denominator are powers of 2. Say we want 56 / 70: this is (2^3 * 7) / (2 * 5 * 7); removing common factors gives 2^2 / 5, so that will not terminate. But 63 / 72 = (7 * 3^2) / (2^3 * 3^2) = 7 / 2^3, so that is a terminating binary number.
If you're working in decimal, then factors of both 2 and 5 are allowed in the denominator.
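As a sketch of that check (class and method names are mine, not from the question):

```java
public class Terminating {
    static long gcd(long a, long b) {
        while (b != 0) { long t = a % b; a = b; b = t; }
        return a;
    }

    // A fraction num/den has a finite binary expansion iff, after reducing
    // to lowest terms, the denominator's only prime factor is 2.
    // (For decimal, factors of 5 would be allowed as well.)
    static boolean terminatesInBinary(long num, long den) {
        long d = den / gcd(num, den); // reduce to lowest terms
        while (d % 2 == 0) d /= 2;    // strip all factors of 2
        return d == 1;                // anything left means a repeating expansion
    }

    public static void main(String[] args) {
        System.out.println(terminatesInBinary(56, 70)); // false: reduces to 4/5
        System.out.println(terminatesInBinary(63, 72)); // true: reduces to 7/8
        System.out.println(terminatesInBinary(1, 8));   // true: 0.125 is exact
    }
}
```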
I'm trying to get the mantissa of a float (just to learn), but it isn't working as expected.
The mantissa of say 5.3 is 53, right? I tried this code:
System.out.println(Float.floatToIntBits(5.3f) & 0x7FFFFF);
It printed 2726298. Shouldn't it remove the exponent bits and leave 53? I've tried plenty of things, but this always happens. What am I doing wrong?
The formula for single precision in the IEEE 754 standard is:
(-1)^sign * 1.mantissa * 2^(exponent - bias)
So 5.3 base 10 is 101.01001100110011001100… base 2 (the group 0011 repeats forever).
101.01001100110011001100… = 1.0101001100110011001100… * 2^2
2^2 = 2^(exp - bias) having bias = 127 (according to the IEEE standard for single precision)
so: exp - 127 = 2 => exp = 129 base 10 or 10000001 base 2
Single precision layout (after rounding the infinite fraction to 23 bits):
0 | 10000001 | 01010011001100110011010
Sign = 0
Exp = 129
Mantissa = 2726298
Note that the stored fraction is the binary expansion rounded to nearest (hence the trailing …010), not the decimal digits 53, so the 2726298 your code printed is exactly right.
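Putting this together, the three fields can be pulled apart with the same masking the question used; a small demo:

```java
public class FloatBits {
    public static void main(String[] args) {
        int bits = Float.floatToIntBits(5.3f);
        int sign     = bits >>> 31;           // 0
        int exponent = (bits >>> 23) & 0xFF;  // 129 (biased), i.e. 129 - 127 = 2
        int fraction = bits & 0x7FFFFF;       // 2726298: the 23 stored fraction bits

        System.out.println(sign);     // 0
        System.out.println(exponent); // 129
        System.out.println(fraction); // 2726298

        // The fraction bits are a binary fraction, not decimal digits:
        // the value is exactly (1 + fraction/2^23) * 2^(exponent - 127).
        System.out.println((1 + fraction / 8388608.0) * 4 == (double) 5.3f); // true
    }
}
```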
From the article IBM: Java's new math. Floating-point numbers (in Russian) the simplest way to get the mantissa is:
public static double getMantissa(double x) {
    // A normal double satisfies x == mantissa * 2^exponent with mantissa in [1, 2),
    // so dividing out the power of two leaves just the mantissa.
    int exponent = Math.getExponent(x);
    return x / Math.pow(2, exponent);
}