I am trying to write a bitwise calculator in Java, something you could input an expression such as ~101 into and have it give back 10. However, when I run this code
import java.util.Scanner;

public class Test
{
    public static void main(String[] args)
    {
        Integer a = Integer.valueOf("101", 2);
        System.out.println(Integer.toString(~a, 2));
    }
}
it outputs -110. Why?
You are assuming that 101 is three bits long. Java doesn't support variable-length bit operations; it operates on a whole int of bits, so ~ will be the NOT of a 32-bit-long "101".
--- Edited after being asked "How can I fix this?" ---
That's a really good question, but the answer is a mix of "you can't" and "you can achieve the same thing by different means".
You can't fix the ~ operator, as it does what it does. It would sort of be like asking to fix + to only add the 1's place. Just not going to happen.
You can achieve the desired operation, but you need a bit more "stuff" to get it going. First you must have something (another int) that specifies the bits of interest. This is typically called a bit mask.
int mask = 0x00000007; // just the last 3 bits.
int masked_inverse = (~value) & mask;
Note that what we did was really invert 32 bits and then zero out 29 of those bits, because they were set to zero in the mask, which means "we don't care about them". This can also be imagined as leveraging the & operator such that we say "if set and we care about it, set it".
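Tying that back to your original example, a minimal runnable sketch (the class name is just for illustration) might look like this:

public class MaskedNot {
    public static void main(String[] args) {
        int value = Integer.valueOf("101", 2);  // 5
        int mask = 0x00000007;                  // just the last 3 bits
        int masked_inverse = (~value) & mask;   // invert all 32 bits, then keep only 3
        System.out.println(Integer.toBinaryString(masked_inverse)); // prints 10
    }
}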
Now you will still have 32 bits, but only the lower 3 will be inverted. If you want a 3 bit data structure, then that's a different story. Java (and most languages) just don't support such things directly. So, you might be tempted to add another type to Java to support that. Java adds types via a class mechanism, but the built-in types are not changeable. This means you could write a class to represent a 3 bit data structure, but it will have to handle ints internally as 32 bit fields.
Fortunately for you, someone has already done this. It is part of the standard Java library, and is called a BitSet.
BitSet threeBits = new BitSet(3);
threeBits.set(2); // set bit index 2
threeBits.set(0); // set bit index 0
threeBits.flip(0,3);
However, such bit manipulations have a different feel to them due to the constraints of the Class / Object system in Java, which follows from defining classes as the only way to add new types in Java.
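Putting that snippet into a runnable form (the printing at the end is only there to show the result) would look something like:

import java.util.BitSet;

public class BitSetDemo {
    public static void main(String[] args) {
        BitSet threeBits = new BitSet(3);
        threeBits.set(2);      // set bit index 2 -> 100
        threeBits.set(0);      // set bit index 0 -> 101
        threeBits.flip(0, 3);  // invert bit indices 0..2 -> 010
        System.out.println(threeBits); // {1} -- only bit index 1 remains set
    }
}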
If a = ...0000101 (bin) = 5 (dec)
~a = ~...0000101 (bin) = ...1111010 (bin)
and Java uses "two's complement" form to represent negative numbers, so
~a = -6 (dec)
Now, the difference between Integer.toBinaryString(number) and Integer.toString(number, 2) for a negative number is that
toBinaryString returns the String in two's-complement form, but
toString(number, 2) calculates the binary form as if the number were positive and adds a minus sign if the argument was negative.
So, as the snippet below shows, toString(number, 2) for ~a = -6 will:
calculate the binary value for 6 -> 0000110,
trim the leading zeros -> 110,
add the minus sign -> -110.
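A quick demonstration of the two outputs side by side:

public class ToStringVsToBinaryString {
    public static void main(String[] args) {
        int a = Integer.valueOf("101", 2);               // 5
        System.out.println(Integer.toString(~a, 2));     // -110 (sign plus magnitude)
        System.out.println(Integer.toBinaryString(~a));  // 11111111111111111111111111111010 (two's complement)
    }
}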
101 as an integer is actually represented as 00000000000000000000000000000101; negate this and you get 11111111111111111111111111111010, which is -6.
The toString() method interprets its argument as a signed value.
To demonstrate binary operations its better to use Integer.toBinaryString(). It interprets its argument as unsigned, so that ~101 is output as 11111111111111111111111111111010.
If you want fewer bits of output you can mask the result with &.
Just to elaborate on Edwin's answer a bit - if you're looking to create a variable length mask to develop the bits of interest, you might want some helper functions:
/**
 * Negate a number, specifying the bits of interest.
 *
 * Negating 52 with an interest of 6 would result in 11 (from 110100 to 001011).
 * Negating 0 with an interest of 32 would result in -1 (equivalent to ~0).
 *
 * @param number the number to negate.
 * @param bitsOfInterest the bits we're interested in limiting ourselves to (32 maximum).
 * @return the negated number.
 */
public int negate(int number, int bitsOfInterest) {
    int negated = ~number;
    int mask = ~0 >>> (32 - bitsOfInterest);
    logger.info("Mask for negation is [" + Integer.toBinaryString(mask) + "]");
    return negated & mask;
}
/**
 * Negate a number, assuming we're interested in negation of all 31 bits (excluding the sign).
 *
 * Negating 32 in this case would result in ({@link Integer#MAX_VALUE} - 32).
 *
 * @param number the number to negate.
 * @return the negated number.
 */
public int negate(int number) {
    return negate(number, 31);
}
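A quick sanity check of those helpers (trimmed to a static, logger-free form so the sketch runs standalone; the class name is just for illustration):

public class NegateDemo {
    static int negate(int number, int bitsOfInterest) {
        int mask = ~0 >>> (32 - bitsOfInterest);
        return ~number & mask;
    }

    public static void main(String[] args) {
        System.out.println(negate(52, 6));   // 11  (110100 -> 001011)
        System.out.println(negate(0, 32));   // -1  (all 32 bits set, same as ~0)
        System.out.println(negate(32, 31));  // 2147483615  (Integer.MAX_VALUE - 32)
    }
}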
Related
In the Random class, define a nextByte method that returns a value of the primitive type
byte. The values returned in a sequence of calls should be uniformly distributed over all the
possible values in the type.
In the Random class, define a nextInt method that returns a value of the primitive type
int. The values returned in a sequence of calls should be uniformly distributed over all the possible
values in the type.
(Hint: Java requires implementations to use the twos-complement representation for integers.
Figure out how to calculate a random twos-complement representation from four random byte
values using Java’s shift operators.)
Hi, I was able to do part 3 and now I need to use 3 to solve 4, but I do not know what to do. I was thinking of using nextByte to make an array of 4 bytes; then would I take the two's complement of each so I wouldn't have negative numbers, and then put them together into one int?
byte[] bytes = {42, -15, -7, 8}; Suppose nextByte returns these bytes.
Then I would take the two's complement of each, which I think would be {42, 241, 249, 8}. Is this what it would look like, and why doesn't this code work:
public static int twosComplement(int input_value, int num_bits){
    int mask = (int) Math.pow(2, (num_bits - 1));
    return -(input_value & mask) + (input_value & ~mask);
}
Then I would use the following to put all four bytes into an int; would this work:
int i= (bytes[0]<<24)&0xff000000|
(bytes[1]<<16)&0x00ff0000|
(bytes[2]<< 8)&0x0000ff00|
(bytes[3]<< 0)&0x000000ff;
Please be as specific as possible.
The assignment says that Java already uses two's complement integers. This is a useful property that simplifies the rest of the code: it guarantees that if you group together 32 random bits (or in general however many bits your desired output type has), then this covers all possible values exactly once and there are no invalid patterns.
That might not be true of some other integer representations, which might only have 2³²-1 different values (leaving an invalid pattern that you would have to avoid) or have 2³² valid patterns but both a "positive" and a "negative" zero, which would cause a random bit pattern to have a biased "interpreted value" (with zero occurring twice as often as it should).
So that is not something for you to do; it is a convenient property for you to use to keep the code simple. Actually, you already used it. This code:
int i= (bytes[0]<<24)&0xff000000|
(bytes[1]<<16)&0x00ff0000|
(bytes[2]<< 8)&0x0000ff00|
(bytes[3]<< 0)&0x000000ff;
Works properly thanks to those properties. By the way, it can be simplified a bit: after shifting left by 24, there is no more issue with sign extension, because all the extended bits have been shifted out. And shifting left by 0 is obviously a no-op. So (bytes[0]<<24)&0xff000000 can be written as (bytes[0]<<24), and (bytes[3]<< 0)&0x000000ff as bytes[3]&0xff. But you can keep it as it was, with the nice regular structure.
The twosComplement function is not necessary.
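For completeness, here is a sketch of how the pieces might fit together with the simplifications above applied. The nextByte here just delegates to java.util.Random as a stand-in for whatever you wrote for part 3 (that delegation is an assumption for illustration):

import java.util.Random;

public class RandomInts {
    private static final Random RNG = new Random();

    // Stand-in for part 3's nextByte: any uniformly distributed byte source works.
    static byte nextByte() {
        return (byte) RNG.nextInt(256);
    }

    static int nextInt() {
        // Group 32 random bits: mask off the sign-extended bits of the lower three bytes;
        // the top byte needs no mask because the shift by 24 discards the extended bits.
        return (nextByte() << 24)
                | (nextByte() << 16) & 0x00ff0000
                | (nextByte() << 8)  & 0x0000ff00
                | (nextByte())       & 0x000000ff;
    }

    public static void main(String[] args) {
        System.out.println(nextInt());
    }
}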
The limit of int is from -2147483648 to 2147483647.
If I input
int i = 2147483648;
then Eclipse will show a red underline under "2147483648".
But if I do this:
int i = 1024 * 1024 * 1024 * 1024;
it will compile fine.
public class Test {
    public static void main(String[] args) {
        int i = 2147483648; // error
        int j = 1024 * 1024 * 1024 * 1024; // no error
    }
}
Maybe it's a basic question in Java, but I have no idea why the second variant produces no error.
There's nothing wrong with that statement; you're just multiplying 4 numbers and assigning the result to an int, and there just happens to be an overflow. This is different from assigning a single literal, which is bounds-checked at compile time.
It is the out-of-bounds literal that causes the error, not the assignment:
System.out.println(2147483648); // error
System.out.println(2147483647 + 1); // no error
By contrast a long literal would compile fine:
System.out.println(2147483648L); // no error
Note that, in fact, the result is still computed at compile-time because 1024 * 1024 * 1024 * 1024 is a constant expression:
int i = 1024 * 1024 * 1024 * 1024;
becomes:
0: iconst_0
1: istore_1
Notice that the result (0) is simply loaded and stored, and no multiplication takes place.
From JLS §3.10.1 (thanks to @ChrisK for bringing it up in the comments):
It is a compile-time error if a decimal literal of type int is larger than 2147483648 (2³¹), or if the decimal literal 2147483648 appears anywhere other than as the operand of the unary minus operator (§15.15.4).
1024 * 1024 * 1024 * 1024 and 2147483648 do not have the same value in Java.
Actually, 2147483648 ISN'T EVEN A VALUE (although 2147483648L is) in Java. The compiler literally does not know what it is, or how to use it. So it whines.
1024 is a valid int in Java, and a valid int multiplied by another valid int is always a valid int, even if it's not the same value that you would intuitively expect, because the calculation will overflow.
Example
Consider the following code sample:
public static void main(String[] args) {
int a = 1024;
int b = a * a * a * a;
}
Would you expect this to generate a compile error? It becomes a little more slippery now.
What if we put a loop with 3 iterations and multiplied in the loop?
The compiler is allowed to optimize, but it can't change the behaviour of the program while it's doing so.
Some info on how this case is actually handled:
In Java and many other languages, integers consist of a fixed number of bits. Calculations that don't fit in the given number of bits will overflow; the calculation is basically performed modulo 2^32 in Java, after which the value is converted back into a signed integer.
Other languages or APIs use a dynamic number of bits (BigInteger in Java), raise an exception, or set the value to a magic value such as not-a-number.
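A small sketch of those alternatives side by side (the values are chosen only to force the overflow):

import java.math.BigInteger;

public class OverflowDemo {
    public static void main(String[] args) {
        System.out.println(1024 * 1024 * 1024 * 1024);        // 0: 2^40 modulo 2^32
        System.out.println(1024L * 1024 * 1024 * 1024);       // 1099511627776: fits in a long
        System.out.println(BigInteger.valueOf(1024).pow(4));  // 1099511627776: arbitrary precision
    }
}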
I have no idea why the second variant produces no error.
The behaviour that you suggest -- that is, the production of a diagnostic message when a computation produces a value that is larger than the largest value that can be stored in an integer -- is a feature. For you to use any feature, the feature must be thought of, considered to be a good idea, designed, specified, implemented, tested, documented and shipped to users.
For Java, one or more of the things on that list did not happen, and therefore you don't have the feature. I don't know which one; you'd have to ask a Java designer.
For C#, all of those things did happen -- about fourteen years ago now -- and so the corresponding program in C# has produced an error since C# 1.0.
In addition to arshajii's answer I want to show one more thing:
It is not the assignment that causes the error but simply the use of the literal.
When you try
long i = 2147483648;
you'll notice it also causes a compile error, since the right-hand side is still an int literal and out of range.
So operations with int values (and that includes assignments) may overflow without a compile error (and without a runtime error as well), but the compiler just can't accept such out-of-range literals.
A: Because it is not an error.
Background: The multiplication 1024 * 1024 * 1024 * 1024 will lead to an overflow. An overflow is very often a bug. Different programming languages produce different behavior when overflows happen. For example, C and C++ call it "undefined behavior" for signed integers, while the behavior is defined for unsigned integers (take the mathematical result, add UINT_MAX + 1 as long as the result is negative, subtract UINT_MAX + 1 as long as the result is greater than UINT_MAX).
In the case of Java, if the result of an operation with int values is not in the allowed range, conceptually Java adds or subtracts 2^32 until the result is in the allowed range. So the statement is completely legal and not in error. It just doesn't produce the result that you may have hoped for.
You can surely argue whether this behavior is helpful, and whether the compiler should give you a warning. I'd say personally that a warning would be very useful, but an error would be incorrect since it is legal Java.
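As an aside, if you do want overflow to be caught, Java 8 added exact-arithmetic helpers in java.lang.Math that fail fast at run time; a small sketch:

public class CheckedOverflow {
    public static void main(String[] args) {
        int wrapped = Integer.MAX_VALUE + 1;                      // compiles fine; wraps around
        System.out.println(wrapped);                              // -2147483648
        System.out.println(Math.addExact(Integer.MAX_VALUE, 1));  // throws ArithmeticException
    }
}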
I understand what the unsigned right shift operator ">>>" in Java does, but why do we need it, and why do we not need a corresponding unsigned left shift operator?
The >>> operator lets you treat int and long as 32- and 64-bit unsigned integral types, which are missing from the Java language.
This is useful when you shift something that does not represent a numeric value. For example, you could represent a black and white bit map image using 32-bit ints, where each int encodes 32 pixels on the screen. If you need to scroll the image to the right, you would prefer the bits on the left of an int to become zeros, so that you could easily put the bits from the adjacent ints:
int shiftBy = 3;
int[] imageRow = ...
int carry = 0;
// The last shiftBy bits are set to 1, the remaining ones are zero
int mask = (1 << shiftBy) - 1;
for (int i = 0 ; i != imageRow.length ; i++) {
    // Cut out the shiftBy bits on the right
    int nextCarry = imageRow[i] & mask;
    // Do the shift, and move the carry into the freed upper bits
    imageRow[i] = (imageRow[i] >>> shiftBy) | (carry << (32-shiftBy));
    // Prepare the carry for the next iteration of the loop
    carry = nextCarry;
}
The code above does not pay attention to the content of the upper three bits, because the >>> operator makes them zero.
There is no corresponding unsigned left-shift operator, because left-shift operations on signed and unsigned data types are identical.
>>> is also the safe and efficient way of finding the rounded mean of two (large) integers:
int mid = (low + high) >>> 1;
If integers high and low are close to the largest machine integer, the above will be correct but
int mid = (low + high) / 2;
can get a wrong result because of overflow.
Here's an example use, fixing a bug in a naive binary search.
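A small demonstration of the failure mode and the fix (the values are picked only to push the sum past Integer.MAX_VALUE):

public class MidpointDemo {
    public static void main(String[] args) {
        int low = 2_000_000_000;
        int high = 2_100_000_000;
        System.out.println((low + high) / 2);    // -97483648: the sum overflowed
        System.out.println((low + high) >>> 1);  // 2050000000: same bits read as unsigned, then halved
    }
}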
Basically this has to do with signed (numeric) shifts versus unsigned shifts (normally pixel-related stuff).
Since the left shift doesn't deal with the sign bit anyhow, it's the same thing (<<< and <<)...
Either way, I have yet to meet anyone that needed to use >>>, but I'm sure they are out there doing amazing things.
As you have just seen, the >> operator automatically fills the
high-order bit with its previous contents each time a shift occurs.
This preserves the sign of the value. However, sometimes this is
undesirable. For example, if you are shifting something that does not
represent a numeric value, you may not want sign extension to take
place. This situation is common when you are working with pixel-based
values and graphics. In these cases you will generally want to shift a
zero into the high-order bit no matter what its initial value was.
This is known as an unsigned shift. To accomplish this, you will use
Java's unsigned, shift-right operator, >>>, which always shifts zeros
into the high-order bit.
Further reading:
http://henkelmann.eu/2011/02/01/java_the_unsigned_right_shift_operator
http://www.java-samples.com/showtutorial.php?tutorialid=60
The signed right-shift operator is useful if one has an int that represents a number and one wishes to divide it by a power of two, rounding toward negative infinity. This can be nice when doing things like scaling coordinates for display; not only is it faster than division, but coordinates which differ by the scale factor before scaling will differ by one pixel afterward. If instead of using shifting one uses division, that won't work. When scaling by a factor of two, for example, -1 and +1 differ by two, and should thus differ by one afterward, but -1/2=0 and 1/2=0. If instead one uses signed right-shift, things work out nicely: -1>>1=-1 and 1>>1=0, properly yielding values one pixel apart.
The unsigned operator is useful either in cases where the input is expected to have exactly one bit set and one wants the result to do so as well, or in cases where one will be using a loop to output all the bits in a word and wants it to terminate cleanly. For example:
void processBitsLsbFirst(int n, BitProcessor whatever)
{
    while(n != 0)
    {
        whatever.processBit(n & 1);
        n >>>= 1;
    }
}
If the code were to use a signed right-shift operation and were passed a negative value, it would output 1's indefinitely. With the unsigned-right-shift operator, however, the most significant bit ends up being interpreted just like any other.
The unsigned right-shift operator may also be useful when a computation would, arithmetically, yield a positive number between 0 and 4,294,967,295 and one wishes to divide that number by a power of two. For example, when computing the sum of two int values which are known to be positive, one may use (n1+n2)>>>1 without having to promote the operands to long. Also, if one wishes to divide a positive int value by something like pi without using floating-point math, one may compute ((value*5468522205L) >>> 34) [(1L<<34)/pi is 5468522204.61, which rounded up yields 5468522205]. For dividends over 1686629712, the computation of value*5468522205L would yield a "negative" value, but since the arithmetically-correct value is known to be positive, using the unsigned right-shift would allow the correct positive number to be used.
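A quick check of that last trick (the constant comes from the paragraph above; the floating-point line is only there for comparison):

public class FixedPointDivideByPi {
    public static void main(String[] args) {
        int value = 2_000_000_000;                        // above the 1686629712 threshold
        System.out.println((value * 5468522205L) >>> 34); // 636619772: >>> ignores the "negative" product
        System.out.println((long) (value / Math.PI));     // 636619772: same result via floating point
    }
}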
A normal right shift >> of a negative number will keep it negative. I.e. the sign bit will be retained.
An unsigned right shift >>> will shift the sign bit too, replacing it with a zero bit.
There is no need to have the equivalent left shift because there is only one sign bit and it is the leftmost bit so it only interferes when shifting right.
Essentially, the difference is that one preserves the sign bit, the other shifts in zeros to replace the sign bit.
For positive numbers they act identically.
For an example of using both >> and >>> see BigInteger shiftRight.
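A small illustration of the difference:

public class ShiftDemo {
    public static void main(String[] args) {
        System.out.println(-8 >> 1);   // -4: the sign bit is copied in, the number stays negative
        System.out.println(-8 >>> 1);  // 2147483644: a zero bit is shifted in instead
        System.out.println(8 >> 1);    // 4: identical results
        System.out.println(8 >>> 1);   // 4: for positive numbers
    }
}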
In the Java domain, for most typical applications, the way to avoid overflow is to use casting or BigInteger, such as casting int to long in the previous examples.
int hiint = 2147483647;
System.out.println("mean hiint+hiint/2 = " + ((long) hiint + (long) hiint) / 2);
System.out.println("mean hiint*2/2 = " + ((long) hiint * (long) 2) / 2);
BigInteger bhiint = BigInteger.valueOf(2147483647);
System.out.println("mean bhiint+bhiint/2 = " + bhiint.add(bhiint).divide(BigInteger.valueOf(2)));
I would like to know which one is the best way to work with binary numbers in java.
I need a way to create an array of binary numbers and do some calculations with them.
For example, I would like to XOR the values or multiply matrices of binary numbers.
Problem solved:
Thanks very much for all the info.
I think for my case I'm going to use the BitSet mentioned by @Jarrod Roberson.
As of Java 7, you can simply use binary numbers by declaring ints and preceding your numbers with 0b or 0B:
int x=0b101;
int y=0b110;
int z=x+y;
System.out.println(x + "+" + y + "=" + z);
//5+6=11
/*
* If you want to output in binary format, use Integer.toBinaryString()
*/
System.out.println(Integer.toBinaryString(x) + "+" + Integer.toBinaryString(y)
+ "=" + Integer.toBinaryString(z));
//101+110=1011
What you are probably looking for is the BitSet class.
This class implements a vector of bits that grows as needed. Each
component of the bit set has a boolean value. The bits of a BitSet are
indexed by nonnegative integers. Individual indexed bits can be
examined, set, or cleared. One BitSet may be used to modify the
contents of another BitSet through logical AND, logical inclusive OR,
and logical exclusive OR operations.
By default, all bits in the set initially have the value false.
Every bit set has a current size, which is the number of bits of space
currently in use by the bit set. Note that the size is related to the
implementation of a bit set, so it may change with implementation. The
length of a bit set relates to logical length of a bit set and is
defined independently of implementation.
Unless otherwise noted, passing a null parameter to any of the methods
in a BitSet will result in a NullPointerException.
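For the XOR part of the question, a small sketch using BitSet (the values are arbitrary; BitSet.valueOf and binary literals need Java 7):

import java.util.BitSet;

public class BitSetXor {
    public static void main(String[] args) {
        BitSet a = BitSet.valueOf(new long[] { 0b1101 });
        BitSet b = BitSet.valueOf(new long[] { 0b1011 });
        a.xor(b);               // 1101 ^ 1011 = 0110
        System.out.println(a);  // {1, 2}: bit indices 1 and 2 are set
    }
}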
There's a difference between the number itself and its representation in the language. For instance, "0xD" (radix 16), "13" (radix 10), "015" (radix 8) and "0b1101" (radix 2) are four different representations referring to the same number.
That said, you can use the int primitive data type in the Java language to represent any binary number (as well as any number in any radix), but only since Java 7 are you able to use a binary literal, in the same way you were previously able to use the octal (0) and hexadecimal (0x) literals to represent those numbers, if I understood your question correctly.
You can store them as byte arrays, then access the bits individually. Then to XOR them you can merely XOR the bytes (it is a bitwise operation).
Of course it doesn't have to be a byte array (could be an array of int types or whatever you want), since everything is stored in binary in the end.
I've never seen a computer that uses anything but binary numbers.
The XOR operator in Java is ^. For example, 5 ^ 3 = 6. The default radix for most number-to-string conversions is 10, but there are several methods which allow you to specify another base, like 2:
System.out.println(Integer.toString(5 ^ 3, 2));
If you are using Java 7, you can use binary literals in your source code (in addition to the decimal, hexadecimal, and octal forms previously supported).
Is there a way in Java to use unsigned numbers like in (My)SQL?
For example: I want to use an 8-bit variable (byte) with a range like 0 ... 255 instead of -128 ... 127.
No, Java doesn't have any unsigned primitive types apart from char (which has values 0-65535, effectively). It's a pain (particularly for byte), but that's the way it is.
Usually you either stick with the same size, and overflow into negatives for the "high" numbers, or use the wider type (e.g. short for byte) and cope with the extra memory requirements.
You can use a class to simulate an unsigned number. For example
public class UInt8 implements Comparable<UInt8>, Serializable
{
    public static final short MAX_VALUE = 255;
    public static final short MIN_VALUE = 0;
    private short storage; // internal storage in a 16-bit short

    public UInt8(short value)
    {
        if(value < MIN_VALUE || value > MAX_VALUE) throw new IllegalArgumentException();
        this.storage = value;
    }

    public byte toByte()
    {
        // the stored 0..255 value fits in the low 8 bits; the cast keeps just those
        return (byte) storage;
    }
    //etc...
}
You can mostly use signed numbers as if they were unsigned. Most operations stay the same, some need to be modified. See this post.
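A few of the common idioms from that approach, as a small sketch (Byte.toUnsignedInt and Integer.compareUnsigned need Java 8; on earlier versions you use the masks directly):

public class UnsignedIdioms {
    public static void main(String[] args) {
        byte b = (byte) 200;                                  // stored as -56 in a signed byte
        System.out.println(b & 0xFF);                         // 200: masking recovers the unsigned value
        System.out.println(Byte.toUnsignedInt(b));            // 200: Java 8 helper for the same mask
        System.out.println(Integer.compareUnsigned(-1, 1));   // positive: -1 is 4294967295 read unsigned
    }
}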
Internally, you shouldn't be using the smaller values--just use int. As I understand it, using smaller units does nothing but slow things down. It doesn't save memory, because internally Java uses the system's word size for all storage (it won't pack words).
However, if you use a smaller-size storage unit, it has to mask it or range-check or something for every operation.
Ever notice that char (any operation) char yields an int? They just really don't expect you to use these other types.
The exceptions are arrays (which I believe will get packed) and I/O where you might find using a smaller type useful... but masking will work as well.
Nope, you can't change that. If you need something larger than 127 choose something larger than a byte.
If you need to optimize your storage (e.g. a large matrix), you can encode bigger positive numbers with negative numbers, so as to save space. Then you have to shift the number value to get the actual value when needed. For instance, I want to manipulate short positive numbers only. Here is how this is possible in Java:
short n = 32767;
n = (short) (n + 10);
System.out.println(n);
int m = (int) (n>=0?n:n+65536);
System.out.println(m);
So when a short integer exceeds the range, it becomes negative. Yet at least you can store this number in 16 bits, and restore its correct value by adding the shift value (the number of different values that can be coded). The value should be restored into a larger type (int in our case). This may not be very convenient, but I find it is in my case.
I'm quite new to Java and to programming.
Yet I encountered the same situation recently: the need for unsigned values.
It took me around two weeks to code everything I had in mind, but I'm a total noob, so you could spend much less.
The general idea is to create an interface, which I have named UnsignedNumber<Base, Shifted>, and to extend Number.class whilst implementing an abstract AbstractUnsigned<Base, Shifted, Impl extends AbstractUnsigned<Base, Shifted, Impl>> class.
So, the Base type parameter represents the base type, Shifted represents the actual Java type, and Impl is a shortcut for the implementation of this abstract class.
Most of the time was consumed by the boilerplate of Java 8 lambdas, internal private classes and safety procedures. The important thing was to achieve the behaviour of unsigned numbers when a mathematical operation like subtraction or adding a negative crosses the zero limit: to overflow backwards from the upper unsigned limit.
Finally, it took another couple of days to code factories and implementation subclasses.
So far I have:
UByte and MUByte
UShort and MUShort
UInt and MUInt
... Etc.
They are descendants of AbstractUnsigned:
UByte or MUByte extend AbstractUnsigned<Byte, Short, UByte> or AbstractUnsigned<Byte, Short, MUByte>
UShort or MUShort extend AbstractUnsigned<Short, Integer, UShort> or AbstractUnsigned<Short, Integer, MUShort>
...etc.
The general idea is to take the unsigned upper limit in the shifted (cast) type and to code the transposition of negative values as if they counted not from zero but from the unsigned upper limit.
UPDATE:
(Thanks to Ajeans kind and polite directions)
/**
 * Adds value to the current number and returns either
 * new or this {@linkplain UnsignedNumber} instance based on
 * {@linkplain #isImmutable()}
 *
 * @param value value to add to the current value
 * @return new or same instance
 * @see #isImmutable()
 */
public Impl plus(N value) {
    return updater(number.plus(convert(value)));
}
This is an externally accessible method of AbstractUnsigned<N, Shifted, Impl> (or, as it was written before, AbstractUnsigned<Base, Shifted, Impl>).
Now, to the under-the-hood work:
private Impl updater(Shifted invalidated){
    if(mutable){
        number.setShifted(invalidated);
        return caster.apply(this);
    } else {
        return shiftedConstructor.apply(invalidated);
    }
}
In the above private method mutable is a private final boolean of an AbstractUnsigned. number is one of the internal private classes which takes care of transforming Base to Shifted and vice versa.
What matters, in correspondence with the previous 'what I did last summer' part,
is two internal objects: caster and shiftedConstructor:
final private Function<UnsignedNumber<N, Shifted>, Impl> caster;
final private Function<Shifted, Impl> shiftedConstructor;
These are the parameterized functions to cast N (or Base) to Shifted, or to create a new Impl instance if the current implementation instance of AbstractUnsigned<> is immutable.
Shifted plus(Shifted value){
    return spawnBelowZero.apply(summing.apply(shifted, value));
}
This fragment shows the adding method of the number object. The idea was to always use Shifted internally, because it is uncertain when the positive limit of the 'original' type will be crossed. shifted is an internal parameterized field which bears the value of the whole AbstractUnsigned<>. The other two Function<> derivative objects are given below:
final private BinaryOperator<Shifted> summing;
final private UnaryOperator<Shifted> spawnBelowZero;
The former performs the addition of two Shifted values, and the latter performs the below-zero transposition.
And now an example from one of the factory boilerplate 'hells', for AbstractUnsigned<Byte, Short>, specifically for the spawnBelowZero UnaryOperator<Shifted> mentioned before:
...,
v -> v >= 0
        ? v
        : (short) (Math.abs(Byte.MIN_VALUE) + Byte.MAX_VALUE + 2 + v),
...
If Shifted v is positive, nothing really happens and the original value is returned. Otherwise, there's a need to calculate the upper limit of the Base type, which is Byte, and add the negative v to that value. If, let's say, v == -8, then Math.abs(Byte.MIN_VALUE) will produce 128 and Byte.MAX_VALUE will produce 127, which gives 255 + 1 to get the original upper limit that was cut off by the sign bit, as I understand it, so the desirable 256 is in place. But the very first negative value is actually that 256, which is why +1 again, or +2 in total. Finally, 255 + 2 + v, where v is -8, gives 255 + 2 + (-8) = 249.
Or in a more visual way: the values 0 1 2 3 ... 248 stand for themselves, while the top of the range maps to the negative values:
249 ↔ -8, 250 ↔ -7, 251 ↔ -6, 252 ↔ -5, 253 ↔ -4, 254 ↔ -3, 255 ↔ -2, 256 ↔ -1
And to finalize all that: this definitely does not ease your work or save memory bytes, but you get pretty much the desired behaviour when it is needed. And you can use that behaviour with pretty much any other Number.class subclass. AbstractUnsigned, being a subclass of Number.class itself, provides all the convenience methods and constants
similar to the other 'native' Number.class subclasses, including MIN_VALUE and MAX_VALUE and a lot more; for example, I coded a convenience method for mutable subclasses called makeDivisibileBy(Number n) which performs the simple operation value - (value % n).
My initial endeavour here was to show that even a noob, such as I am, can code it. My initial endeavour when I was coding that class was to get a conveniently versatile tool for constant use.