The range of an int is from -2147483648 to 2147483647.
If I input
int i = 2147483648;
then Eclipse puts a red underline under "2147483648".
But if I do this:
int i = 1024 * 1024 * 1024 * 1024;
it will compile fine.
public class Test {
    public static void main(String[] args) {
        int i = 2147483648;                // error
        int j = 1024 * 1024 * 1024 * 1024; // no error
    }
}
Maybe it's a basic question in Java, but I have no idea why the second variant produces no error.
There's nothing wrong with that statement; you're just multiplying four numbers and assigning the result to an int, and there happens to be an overflow. This is different from assigning a single literal, which is bounds-checked at compile time.
It is the out-of-bounds literal that causes the error, not the assignment:
System.out.println(2147483648); // error
System.out.println(2147483647 + 1); // no error
By contrast a long literal would compile fine:
System.out.println(2147483648L); // no error
Note that, in fact, the result is still computed at compile-time because 1024 * 1024 * 1024 * 1024 is a constant expression:
int i = 1024 * 1024 * 1024 * 1024;
becomes:
0: iconst_0
1: istore_1
Notice that the result (0) is simply loaded and stored, and no multiplication takes place.
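You can reproduce this yourself by compiling the class and dumping the bytecode with javap:

javac Test.java
javap -c Test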
From JLS §3.10.1 (thanks to @ChrisK for bringing it up in the comments):
It is a compile-time error if a decimal literal of type int is larger than 2147483648 (2^31), or if the decimal literal 2147483648 appears anywhere other than as the operand of the unary minus operator (§15.15.4).
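In other words, the literal is only valid when it appears directly under a unary minus. A quick check (the variable names are just for illustration):

int ok = -2147483648;    // legal: 2147483648 is the operand of unary minus
int bad = -(2147483648); // error: the parentheses make the literal stand on its own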
1024 * 1024 * 1024 * 1024 and 2147483648 do not have the same value in Java.
Actually, 2147483648 ISN'T EVEN A VALUE (although 2147483648L is) in Java. The compiler literally does not know what it is, or how to use it. So it whines.
1024 is a valid int in Java, and a valid int multiplied by another valid int is always a valid int, even if it's not the value you would intuitively expect, because the calculation overflows.
Example
Consider the following code sample:
public static void main(String[] args) {
    int a = 1024;
    int b = a * a * a * a;
}
Would you expect this to generate a compile error? It becomes a little more slippery now.
What if we put a loop with 3 iterations and multiplied in the loop?
The compiler is allowed to optimize, but it can't change the behaviour of the program while it's doing so.
Some info on how this case is actually handled:
In Java and many other languages, integers consist of a fixed number of bits. Calculations that don't fit in the given number of bits overflow; in Java the calculation is essentially performed modulo 2^32, after which the value is converted back into a signed integer.
Other languages or APIs use a dynamic number of bits (BigInteger in Java), raise an exception, or set the value to a magic value such as not-a-number.
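A small sketch of those strategies in Java (the class name is arbitrary; Math.multiplyExact exists since Java 8):

import java.math.BigInteger;

public class OverflowDemo {
    public static void main(String[] args) {
        // 1. Silent wrap-around: 2^40 mod 2^32 == 0
        System.out.println(1024 * 1024 * 1024 * 1024);       // 0

        // 2. Dynamic number of bits: the exact result
        System.out.println(BigInteger.valueOf(1024).pow(4)); // 1099511627776

        // 3. Opt-in checking: throws ArithmeticException at run time
        System.out.println(Math.multiplyExact(1024 * 1024 * 1024, 1024));
    }
}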
I have no idea why the second variant produces no error.
The behaviour that you suggest -- that is, the production of a diagnostic message when a computation produces a value that is larger than the largest value that can be stored in an integer -- is a feature. For you to use any feature, the feature must be thought of, considered to be a good idea, designed, specified, implemented, tested, documented and shipped to users.
For Java, one or more of the things on that list did not happen, and therefore you don't have the feature. I don't know which one; you'd have to ask a Java designer.
For C#, all of those things did happen -- about fourteen years ago now -- and so the corresponding program in C# has produced an error since C# 1.0.
In addition to arshajii's answer I want to show one more thing:
It is not the assignment that causes the error but simply the use of the literal.
When you try
long i = 2147483648;
you'll notice it also causes a compile error, since the right-hand side is still an int literal, and it is out of range.
So operations on int values (and that includes assignments) may overflow without a compile error (and without a runtime error as well); the compiler just rejects literals that are too large.
A: Because it is not an error.
Background: The multiplication 1024 * 1024 * 1024 * 1024 will lead to an overflow. An overflow is very often a bug. Different programming languages produce different behavior when overflows happen. For example, C and C++ call it "undefined behavior" for signed integers, while the behavior is defined for unsigned integers (take the mathematical result, add UINT_MAX + 1 as long as the result is negative, subtract UINT_MAX + 1 as long as the result is greater than UINT_MAX).
In the case of Java, if the result of an operation with int values is not in the allowed range, conceptually Java adds or subtracts 2^32 until the result is in the allowed range. So the statement is completely legal and not in error. It just doesn't produce the result that you may have hoped for.
You can surely argue whether this behavior is helpful, and whether the compiler should give you a warning. I'd say personally that a warning would be very useful, but an error would be incorrect since it is legal Java.
Related: Why is the default type of Java integer literals int instead of long?
I know that the max values for int and long are very high, but somehow I am not able to handle them in my code. I am getting a compilation error in all the scenarios below. Could someone please suggest how I can handle the value 20118998631?
I know that if I put l after this value (20118998631l) then declaring it as long will work, but the problem is that I am getting this from a network call, and if I declare my field as long while the value comes in simply as 20118998631, then it will break.
int x = 20118998631; // compilation error
long l = 20118998631; // compilation error
double d = 20118998631; // compilation error
Long l1 = new Long(20118998631); // compilation error
I know that the max values for int and long are very high
The definition of 'very' is in the eye of the beholder, isn't it?
The max value of an int is Integer.MAX_VALUE, which is 2147483647. Specifically, that's 2 to the 31st power, minus 1. Because computers use bits, int uses 32 bits, and about half of all the numbers an int can represent are used for negative numbers, hence why you end up with 2^31-1 as max int.
For longs, it's 2^63-1 for the same reason: long uses 64 bits, and half of all representable values are negative.
If you have numbers that are larger than this, you need to use BigInteger (a class in the standard library for integers of arbitrary size) or byte[] or something else.
but the problem is that I am getting this from a network call, and if I declare my field as long while the value comes in simply as 20118998631, then it will break.
This doesn't make sense. Are you getting stuff from the network, shoving what you get into a file with a prefix and suffix, and then compiling that file with javac? That sounds bonkers, but if you are, just add that L. Or add a " before and after the number and pass that to new BigInteger instead; now the number can be thousands of digits large if you want it to be.
Otherwise, you're getting bytes which either represent the number directly (and the L aspect is not relevant to this conversation), or you're getting a string in, which again isn't relevant: that L is just for writing literal numbers inside .java files; it doesn't come up anywhere else. Turning a string containing digits, such as "20118998631", into e.g. a long is done with Long.parseLong("20118998631"), which works fine and does not require the L (in fact, it won't work if you include it).
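A minimal sketch of both conversions (the class name is arbitrary):

import java.math.BigInteger;

class ParseDemo {
    public static void main(String[] args) {
        long asLong = Long.parseLong("20118998631");      // no L suffix inside the string
        BigInteger asBig = new BigInteger("20118998631"); // handles arbitrarily large values
        System.out.println(asLong + " " + asBig);
    }
}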
As people have already mentioned in the comments, we need more details about your networking in order to answer the question in full.
In general:
Your response will be in the form of a string (or a byte array, to be exact) and you will have to convert this to your representation.
You will have to use a function to convert the string to your desired representation, for example, Long.valueOf("20118998631") would do it.
But depending on the setup in use you might need to configure your networking to interpret the incoming number as a long.
In general, most developers tend to use int for most things, so you might have code in your networking that tries to convert all numbers to int, which will not work with numbers larger than 2147483647, no matter what. The stack trace should help in this case.
For the examples provided in your question:
int x = 20118998631;
/*
 * compilation error; the number doesn't fit within an integer and never will.
 * Therefore expected.
 */
long l = 20118998631;
/*
 * compilation error; you assign to the variable l the integer constant 20118998631,
 * and this will not work as the given integer is larger than max int.
 * Use 20118998631L (note the L) to get a long.
 */
double d = 20118998631;
/*
 * compilation error, same as with long above; append D (or write 20118998631.0)
 * to make the literal a double.
 */
Long l1 = new Long(20118998631);
/*
 * compilation error.
 * Same as with long above: you declare an integer constant larger than max int
 * and box it into a Long. Add a trailing L, and preferably drop the explicit
 * boxing (Java will auto-box if necessary).
 */
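For completeness, here is how each of the four declarations can be fixed (a sketch; note that new Long(...) has been deprecated since Java 9 in favour of auto-boxing or Long.valueOf):

// int x = ...;          // cannot be fixed: the value does not fit in an int
long l = 20118998631L;   // long literal
double d = 20118998631D; // double literal; 20118998631.0 works too
Long l1 = 20118998631L;  // auto-boxed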
So 20118998631L = 0x4_AF2F_8E67L, which does not fit in a 4-byte Java int, only in a long.
The REST API may define the field as an INT, but this value does not fit in a 32-bit SQL INT either; it needs a BIGINT.
Getting the "INT" as a String s, you must do Long.parseLong(s).
Getting the "INT" as raw bytes, the leading 04 might be a size (4 bytes), followed by the value 2939129447 (0xAF2F8E67).
Related: Why don't Java's +=, -=, *=, /= compound assignment operators require casting?
I know that the compiler does implicit type conversion for integer literals.
For example:
byte b = 2; // implicit type conversion, same as byte b = (byte)2;
The compiler gives me an error if the value is out of range:
byte b = 150; // error, it says cannot convert from int to byte
The compiler gives the same error when the variable is assigned an expression involving other variables:
byte a = 3;
byte b = 5;
byte c = 2 + 7; // compiles fine
byte d = 1 + b; // error, it says cannot convert from int to byte
byte e = a + b; // error, it says cannot convert from int to byte
I came to the conclusion that the result of an expression that involves variables cannot be guaranteed: the resulting value can be within or outside the byte range, so the compiler throws an error.
What puzzles me is that the compiler does not throw an error when I put it like this:
byte a = 127;
byte b = 5;
byte z = (a+=b); // no error, why ?
Why does it not give me an error?
While decompiling your code will explain what Java is doing, the reason why it's doing it can be generally found in the language specification. But before we go into that, we have to establish a few important concepts:
A literal numeral is always interpreted as an int.
An integer literal is of type long if it is suffixed with an ASCII letter L or l (ell); otherwise it is of type int (§4.2.1).
A byte can only hold an integer value between -128 and 127, inclusive.
An attempt to assign a literal that is larger than what the type can hold will result in a compilation error. This is the first scenario you're encountering.
So we're back to this scenario: why would adding two bytes whose sum is clearly more than a byte can handle not produce a compilation error?
It won't raise a run-time exception because of overflow.
This is the scenario in which two numbers added together suddenly produce a very small number. Due to the small size of byte's range, it's extremely easy to overflow; for example, adding 1 to 127 would do it, resulting in -128.
The chief reason it's going to wrap around is due to the way Java handles primitive value conversion; in this case, we're talking about a narrowing conversion. That is to say, even though the sum produced is larger than byte, the narrowing conversion will cause information to be discarded to allow the data to fit into a byte, as this conversion never causes a run-time exception.
To break down your scenario step by step:
Java adds a = 127 and b = 5 together to produce 132.
Java understands that a and b are of type byte, so the result must also be of type byte.
The integer result of this is still 132, but at this point Java performs a cast to narrow the result back into a byte; the compound assignment effectively behaves like a = (byte)(a + b).
Now, both a and z contain the result -124 due to the wrap-around.
The answer is provided by JLS 15.26.2:
For example, the following code is correct:
short x = 3;
x += 4.6;
and results in x having the value 7 because it is equivalent to:
short x = 3;
x = (short)(x + 4.6);
So, as you can see, the latter case actually works because the addition assignment (like any other compound assignment operator) performs an implicit cast to the left-hand side's type (and in your case a is a byte). Expanded, it is equivalent to byte e = (byte)(a + b);, which compiles happily.
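Putting both cases side by side (a small sketch):

byte a = 127;
byte b = 5;
// byte e = a + b;       // error: a + b is promoted to int
byte e = (byte) (a + b); // compiles: explicit narrowing, e == -124
a += b;                  // compiles: implicit narrowing, same as a = (byte)(a + b)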
I came to the conclusion that the result of an expression that involves variables cannot be guaranteed: the resulting value can be within or outside the byte range, so the compiler throws an error.
No, that's not the reason. The compiler of a statically typed language works this way: any variable must be declared and typed, so even if its value is not known at compile time, its type is known. The same goes for implicit constants. Based upon this fact, the rules to compute scales are basically these:
Any variable must have the same or a higher scale than the expression on its right side.
Any expression has the scale of the maximum term involved in it.
An explicit cast forces, of course, the scale of the right-side expression.
(This is in fact a simplified view; the actual rules can be a little more complex.)
Apply it to your cases:
byte d = 1 + b
The actual scales are:
byte = int + byte
... (because 1 is considered as an implicit int constant). So, applying the first rule, the variable must have at least int scale.
And in this case:
byte z = (a+=b);
The actual scales are:
byte = byte += byte
... which is OK.
Update
Then why does byte e = a + b produce a compile-time error?
As I said, the actual type rules in Java are more complex: while the general rules apply to all types, the primitive byte and short types are more restricted. The compiler assumes that adding/subtracting two or more bytes/shorts risks an overflow (as @Makoto stated), so it requires the result to be stored in the next scale considered "safer": an int.
The basic reason is that the compiler behaves a little differently when constants are involved. All integer literals are treated as int constants (unless they have an L or l at the end). Normally, you can't assign an int to a byte. However, there's a special rule where constants are involved; see JLS 5.2. Basically, in a declaration like byte b = 5;, 5 is an int, but it's legal to do the "narrowing" conversion to byte because 5 is a constant and because it fits into the range of byte. That's why byte b = 5 is allowed and byte b = 130 is not.
However, byte z = (a += b); is a different case. a += b just adds b to a, and returns the new value of a; that value is assigned to a. Since a is a byte, there is no narrowing conversion involved--you're assigning a byte to a byte. (If a were an int, the program would always be illegal.)
And the rules say that a + b (and therefore a = a + b, or a += b) is simply allowed to overflow. If the result, at runtime, is too large for a byte, the upper bits just get lost--the value wraps around. Also, the compiler will not "value follow" to notice that a + b would be larger than 127; even though we can tell that the value will be larger than 127, the compiler won't keep track of the previous values. As far as it knows, when it sees a += b, it only knows that the program will add b to a when it runs, and it doesn't look at previous declarations to see what the values will be. (A good optimizing compiler might actually do that kind of work. But we're talking about what makes a program legal or not, and the rules about legality don't concern themselves with optimization.)
I have encountered this before in one project, and this is what I learned:
Unlike C/C++, Java always uses signed primitives. One byte goes from -128 to +127, so if you assign anything beyond this range you will get a compile error.
If you explicitly convert to byte, like (byte) 150, you still won't get what you want (check with a debugger and you will see it converts to something else; see the sketch after this list).
When you use variables like x = a + b, the compiler doesn't know the values at run time and cannot check whether -128 <= a+b <= +127, so it gives an error.
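To make the second point concrete: the narrowing cast keeps only the low 8 bits, so the value wraps into the negative range:

byte b = (byte) 150;   // 150 = 0b10010110; read as a signed byte that is 150 - 256 = -106
System.out.println(b); // prints -106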
Regarding your question about why the compiler doesn't give an error on something like a += b:
I dug into the Java compiler sources available from OpenJDK at
http://hg.openjdk.java.net/jdk9/jdk9/langtools.
I traced the tree processing of operands and came to an interesting expression in one of the compiler files, Lower.java, which is partially responsible for traversing the compile tree. Here is the interesting part of the code (Assignop covers all of the compound operators like +=, -=, /=, ...):
public void visitAssignop(final JCAssignOp tree) {
    ...
    Symbol newOperator = operators.resolveBinary(tree,
                                                 newTag,
                                                 tree.type,
                                                 tree.rhs.type);
    JCExpression expr = lhs;
    // Interesting part:
    if (expr.type != tree.type)
        expr = make.TypeCast(tree.type, expr);
    JCBinary opResult = make.Binary(newTag, expr, tree.rhs);
    opResult.operator = newOperator;
    ....
As you can see, if the rhs has a different type than the lhs, a type cast takes place; so even if you have a float or double on the right-hand side (a += 2.55), you will get no error, because of the type cast.
/*
 * Decompiled result with CFR 0_110.
 */
class Test {
    Test() {
    }

    public static /* varargs */ void main(String ... arrstring) {
        int n = 127;
        int n2 = 5;
        byte by = (byte)(n + n2);
        n = by;
        byte by2 = by;
    }
}
The above is the result of decompiling your original code:
class Test {
    public static void main(String... args) {
        byte a = 127;
        byte b = 5;
        byte z = (a += b); // no error, why ?
    }
}
Internally, Java replaced your a += b operator with the code (byte)(n + n2).
The expression byte1 + byte2 is equivalent to (int)byte1 + (int)byte2, and has type int. While the expression x += y; would generally be equivalent to x = x + y;, such an interpretation would make it impossible to use += with values smaller than int, so the compiler treats byte1 += byte2 as byte1 = (byte)(byte1 + byte2);.
Note that Java's type system was designed first and foremost for simplicity, and its rules were chosen so as to make sense in many cases; but because making the rules simple was more important than making them consistently sensible, there are many cases where the type system rules yield nonsensical behavior. One of the more interesting ones is illustrated via:
long l1 = Math.round(16777217L)
long l2 = Math.round(10000000000L)
In the real world, one wouldn't try to round long constants, of course, but the situation might arise if something like:
long distInTicks = Math.round(getDistance() * 2.54);
were changed to eliminate the scale factor [and getDistance() returned long].
What values would you expect l1 and l2 to receive? Can you figure out why they might receive some other value?
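(Spoiler, in case you want to check your answer: both calls pick the Math.round(float) overload, because float is more specific than double during overload resolution, and round(float) returns an int.)

long l1 = Math.round(16777217L);    // 16777216: 16777217 is not exactly representable as a float
long l2 = Math.round(10000000000L); // 2147483647: round(float) returns an int, which saturates at Integer.MAX_VALUE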
I am trying to write a bitwise calculator in Java, something where you could input an expression such as ~101 and it would give back 10. However, when I run this code
import java.util.Scanner;

public class Test
{
    public static void main(String[] args)
    {
        Integer a = Integer.valueOf("101", 2);
        System.out.println(Integer.toString(~a, 2));
    }
}
it outputs -110. Why?
You are assuming that 101 is three bits long. Java doesn't support variable-length bit operations; it operates on a whole int of bits, so ~ will be the NOT of a 32-bit-wide "101".
--- Edited after being asked "How can I fix this?" ---
That's a really good question, but the answer is a mix of "you can't" and "you can achieve the same thing by different means".
You can't fix the ~ operator, as it does what it does. It would sort of be like asking to fix + to only add the 1's place. Just not going to happen.
You can achieve the desired operation, but you need a bit more "stuff" to get it going. First you must have something (another int) that specifies the bits of interest. This is typically called a bit mask.
int mask = 0x00000007; // just the last 3 bits.
int masked_inverse = (~value) & mask;
Note that what we did was really invert 32 bits, then zeroed out 29 of those bits; because, they were set to zero in the mask, which means "we don't care about them". This can also be imagined as leveraging the & operator such that we say "if set and we care about it, set it".
Now you will still have 32 bits, but only the lower 3 will be inverted. If you want a 3 bit data structure, then that's a different story. Java (and most languages) just don't support such things directly. So, you might be tempted to add another type to Java to support that. Java adds types via a class mechanism, but the built-in types are not changeable. This means you could write a class to represent a 3 bit data structure, but it will have to handle ints internally as 32 bit fields.
Fortunately for you, someone has already done this. It is part of the standard Java library, and is called a BitSet.
BitSet threeBits = new BitSet(3);
threeBits.set(2); // set bit index 2
threeBits.set(0); // set bit index 0
threeBits.flip(0, 3); // flip bit indexes 0 (inclusive) to 3 (exclusive)
However, such bit manipulations have a different feel to them due to the constraints of the Class / Object system in Java, which follows from defining classes as the only way to add new types in Java.
If a = ...0000101 (bin) = 5 (dec), then
~a = ~...0000101 (bin) = ...1111010 (bin)
and Java uses two's complement form to represent negative numbers, so
~a = -6 (dec)
Now, the difference between Integer.toBinaryString(number) and Integer.toString(number, 2) for a negative number is that
toBinaryString returns the String in two's complement form, but
toString(number, 2) calculates the binary form as if the number were positive and adds a minus sign if the argument was negative.
So toString(number, 2) for ~a = -6 will:
calculate the binary value for 6 -> 0000110,
trim leading zeros -> 110,
add the minus sign -> -110.
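The whole difference in one snippet:

int a = Integer.valueOf("101", 2);              // 5
System.out.println(Integer.toBinaryString(~a)); // 11111111111111111111111111111010
System.out.println(Integer.toString(~a, 2));    // -110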
101 as an integer is actually represented as 00000000000000000000000000000101; negate this and you get 11111111111111111111111111111010 - this is -6.
The toString() method interprets its argument as a signed value.
To demonstrate binary operations it's better to use Integer.toBinaryString(). It interprets its argument as unsigned, so that ~101 is output as 11111111111111111111111111111010.
If you want fewer bits of output you can mask the result with &.
Just to elaborate on Edwin's answer a bit - if you're looking to create a variable-length mask covering the bits of interest, you might want some helper functions:
/**
 * Negate a number, specifying the bits of interest.
 *
 * Negating 52 with an interest of 6 would result in 11 (from 110100 to 001011).
 * Negating 0 with an interest of 32 would result in -1 (equivalent to ~0).
 *
 * @param number the number to negate.
 * @param bitsOfInterest the bits we're interested in limiting ourself to (32 maximum).
 * @return the negated number.
 */
public int negate(int number, int bitsOfInterest) {
    int negated = ~number;
    int mask = ~0 >>> (32 - bitsOfInterest);
    logger.info("Mask for negation is [" + Integer.toBinaryString(mask) + "]");
    return negated & mask;
}

/**
 * Negate a number, assuming we're interested in negation of all 31 bits (excluding the sign).
 *
 * Negating 32 in this case would result in ({@link Integer#MAX_VALUE} - 32).
 *
 * @param number the number to negate.
 * @return the negated number.
 */
public int negate(int number) {
    return negate(number, 31);
}
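Assuming these helpers are in scope, a quick usage sketch (expected values in the comments):

negate(0b101, 3); // 2 (0b010): only the low 3 bits are kept
negate(52, 6);    // 11: 110100 -> 001011
negate(0);        // 2147483647: all 31 value bits flipped, i.e. Integer.MAX_VALUE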
Why does the code below print 2147483647 when the actual value is 2147483648?
i = (int)Math.pow(2,31) ;
System.out.println(i);
I understand that the max positive value an int can hold is 2147483647. Then why does code like this auto-wrap to the negative side and print -2147483648?
i = (int)Math.pow(2,31) +1 ;
System.out.println(i);
i is of type Integer. If the second code sample (addition of two integers) can wrap to the negative side when the result goes out of the positive range, why can't the first sample wrap?
Also ,
i = 2147483648 +1 ;
System.out.println(i);
which is very similar to the second code sample, throws a compile error saying the first literal is out of integer range.
My question is: as per the second code sample, why can't the first and third samples auto-wrap to the other side?
For the first code sample, the result is narrowed from a double to an int. JLS §5.1.3 describes how narrowing conversions from double to int are performed.
The relevant part is:
The value must be too large (a positive value of large magnitude or positive infinity), and the result of the first step is the largest representable value of type int or long.
This is why 2^31 (2147483648) is reduced to Integer.MAX_VALUE (2147483647). The same is true for
i = (int)(Math.pow(2,31) + 100.0); // addition; note the parentheses
and
i = (int)10000000000.0d; // == 2147483647
When the addition is done without parentheses, as in your second example, we are then dealing with integer addition. Integral types use 2's complement to represent values. Under this scheme adding 1 to
0x7FFFFFFF (2147483647)
gives
0x80000000
Which is 2's complement for -2147483648. Some languages perform overflow checking for arithmetic operations (e.g. Ada will throw an exception). Java, with its C heritage, does not check for overflow. CPUs typically set an overflow flag when an arithmetic operation overflows or underflows. Language runtimes can check this flag, although this introduces additional overhead, which some feel is unnecessary.
The third example doesn't compile, since the compiler checks literal values against the range of their type and gives a compile error for values out of range. See JLS §3.10.1 - Integer Literals.
Then why does a code like this auto wraps to the negative side and prints -2147483648?
This is called overflow. Java does it because C does it. C does it because most processors do it. In some languages this does not happen. For example some languages will throw an exception, in others the type will change to something that can hold the result.
My question is , as per the second code sample why can't the first and third sample auto wrap to the other side?
Regarding the first program: Math.pow returns a double and does not overflow. When the double is converted to an int, a value too large to represent is clamped to the largest int value.
Regarding your third program: Overflow is rarely a desirable property and is often a sign that your program is no longer working. If the compiler can see that it gets an overflow just from evaluating a constant that is almost certainly an error in the code. If you wanted a large negative number, why would you write a large positive one?
Is there a way in Java to use unsigned numbers like in (My)SQL?
For example: I want to use an 8-bit variable (byte) with a range like: 0 ... 256; instead of -128 ... 127.
No, Java doesn't have any unsigned primitive types apart from char (which has values 0-65535, effectively). It's a pain (particularly for byte), but that's the way it is.
Usually you either stick with the same size, and overflow into negatives for the "high" numbers, or use the wider type (e.g. short for byte) and cope with the extra memory requirements.
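Worth adding, since some of these answers predate it: as of Java 8 the boxed classes ship static helpers that reinterpret the signed primitives as unsigned, which covers many use cases without a wrapper class:

int unsignedByte = Byte.toUnsignedInt((byte) 0xF0); // 240 instead of -16
long unsignedInt = Integer.toUnsignedLong(-1);      // 4294967295
System.out.println(Integer.toUnsignedString(-1));   // "4294967295"
int quotient = Integer.divideUnsigned(-1, 2);       // 2147483647
int cmp = Integer.compareUnsigned(-1, 1);           // positive: 0xFFFFFFFF is the bigger unsigned value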
You can use a class to simulate an unsigned number. For example
import java.io.Serializable;

public class UInt8 implements Comparable<UInt8>, Serializable
{
    public static final short MAX_VALUE = 255;
    public static final short MIN_VALUE = 0;
    private short storage; // internal storage in an int16

    public UInt8(short value)
    {
        if (value < MIN_VALUE || value > MAX_VALUE) throw new IllegalArgumentException();
        this.storage = value;
    }

    public byte toByte()
    {
        // play with the shift operator! << (one possibility: a plain narrowing cast)
        return (byte) storage;
    }

    //etc...
}
You can mostly use signed numbers as if they were unsigned. Most operations stay the same, some need to be modified. See this post.
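For example, addition, subtraction, and multiplication already produce the right bit patterns on their own; it is mainly widening, comparison, and division that need an explicit mask (a sketch):

byte a = (byte) 200;       // stored as -56
byte b = (byte) 100;
byte sum = (byte) (a + b); // 44 == (200 + 100) % 256: the unsigned sum, bit for bit
int widened = a & 0xFF;    // 200: mask when widening, otherwise you get -56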
Internally, you shouldn't be using the smaller values--just use int. As I understand it, using smaller units does nothing but slow things down. It doesn't save memory because internally Java uses the system's word size for all storage (it won't pack words).
However if you use a smaller size storage unit, it has to mask them or range check or something for every operation.
Ever notice that char (any operation) char yields an int? They just really don't expect you to use these other types.
The exceptions are arrays (which I believe will get packed) and I/O where you might find using a smaller type useful... but masking will work as well.
Nope, you can't change that. If you need something larger than 127 choose something larger than a byte.
If you need to optimize your storage (e.g. a large matrix), you can encode bigger positive numbers as negative numbers to save space. Then you have to shift the number's value to get the actual value when needed. For instance, I want to manipulate short positive numbers only. Here is how this is possible in Java:
short n = 32767;
n = (short) (n + 10);
System.out.println(n);
int m = (int) (n>=0?n:n+65536);
System.out.println(m);
So when a short integer exceeds the range, it becomes negative. Yet at least you can store this number in 16 bits and restore its correct value by adding the shift value (the number of different values that can be coded). The value should be restored into a larger type (int in our case). This may not be very convenient, but I find it is so in my case.
I'm quite new to Java and to programming.
Yet, I encountered the same situation recently the need of unsigned values.
It took me around two weeks to code everything I had in mind, but I'm a total noob, so you could spend much less.
The general idea is to create an interface, which I have named UnsignedNumber<Base, Shifted>, and to extend Number.class whilst implementing an abstract AbstractUnsigned<Base, Shifted, Impl extends AbstractUnsigned<Base, Shifted, Impl>> class.
So the Base parameterized type represents the base type, Shifted represents the actual Java type, and Impl is a shortcut for the implementation of this abstract class.
Most of the time was consumed by the boilerplate of Java 8 lambdas, internal private classes, and safety procedures. The important thing was to achieve the behavior of unsigned numbers when a mathematical operation like subtraction or a negative addition crosses the zero limit: to overflow backwards past the unsigned upper limit.
Finally, it took another couple of days to code factories and implementation sub classes.
So far I have:
UByte and MUByte
UShort and MUShort
UInt and MUInt
... Etc.
They are descendants of AbstractUnsigned:
UByte or MUByte extend AbstractUnsigned<Byte, Short, UByte> or AbstractUnsigned<Byte, Short, MUByte>
UShort or MUShort extend AbstractUnsigned<Short, Integer, UShort> or AbstractUnsigned<Short, Integer, MUShort>
...etc.
The general idea is to take the unsigned upper limit as the shifted (cast) type and code the transposition of negative values as if they came not from zero, but from the unsigned upper limit.
UPDATE:
(Thanks to Ajeans' kind and polite directions)
/**
 * Adds value to the current number and returns either
 * a new or this {@linkplain UnsignedNumber} instance based on
 * {@linkplain #isImmutable()}
 *
 * @param value value to add to the current value
 * @return new or same instance
 * @see #isImmutable()
 */
public Impl plus(N value) {
    return updater(number.plus(convert(value)));
}
This is an externally accessible method of AbstractUnsigned<N, Shifted, Impl> (or as it was said before AbstractUnsigned<Base, Shifted, Impl>);
Now, to the under-the-hood work:
private Impl updater(Shifted invalidated) {
    if (mutable) {
        number.setShifted(invalidated);
        return caster.apply(this);
    } else {
        return shiftedConstructor.apply(invalidated);
    }
}
In the above private method mutable is a private final boolean of an AbstractUnsigned. number is one of the internal private classes which takes care of transforming Base to Shifted and vice versa.
What matters, in correspondence with the previous 'what I did last summer' part, are two internal objects: caster and shiftedConstructor:
final private Function<UnsignedNumber<N, Shifted>, Impl> caster;
final private Function<Shifted, Impl> shiftedConstructor;
These are the parameterized functions to cast N (or Base) to Shifted, or to create a new Impl instance if the current implementation instance of the AbstractUnsigned<> is immutable.
Shifted plus(Shifted value) {
    return spawnBelowZero.apply(summing.apply(shifted, value));
}
This fragment shows the adding method of the number object. The idea was to always use Shifted internally, because it is uncertain when the positive limit of the 'original' type will be crossed. shifted is an internal parameterized field which bears the value of the whole AbstractUnsigned<>. The other two Function<> derivative objects are given below:
final private BinaryOperator<Shifted> summing;
final private UnaryOperator<Shifted> spawnBelowZero;
The former performs addition of two Shifted values, and the latter performs the below-zero wrap-around transposition.
And now an example from one of the factory boilerplate 'hells' for AbstractUnsigned<Byte, Short>, specifically for the previously mentioned spawnBelowZero UnaryOperator<Shifted>:
...,
v-> v >= 0
? v
: (short) (Math.abs(Byte.MIN_VALUE) + Byte.MAX_VALUE + 2 + v),
...
If Shifted v is positive, nothing really happens and the original value is returned. Otherwise, there's a need to calculate the upper limit of the Base type, which is Byte, and add the negative v to it. If, let's say, v == -8, then Math.abs(Byte.MIN_VALUE) produces 128 and Byte.MAX_VALUE produces 127, which gives 255; + 1 restores the original upper limit that was cut off by the sign bit, as I understand it, and the so desirable 256 is in place. But the very first negative value actually maps to that 256, hence + 1 again, or + 2 in total. Finally, 255 + 2 + v with v == -8 gives 255 + 2 + (-8) = 249.
Or in a more visual way:
0  1  2  3 ... 245 246 247 248 249 250 251 252 253 254 255 256
                                -8  -7  -6  -5  -4  -3  -2  -1
And to finalize all that: this definitely does not ease your work or save memory bytes, but you get pretty much the desired behaviour when it is needed, and you can use that behaviour with pretty much any other Number.class subclass. AbstractUnsigned, being a subclass of Number.class itself, provides all the convenience methods and constants
similar to other 'native' Number.class subclasses, including MIN_VALUE and MAX_VALUE and a lot more; for example, I coded a convenience method for mutable subclasses called makeDivisibileBy(Number n) which performs the simple operation value - (value % n).
My endeavour here was to show that even a noob such as I am can code this. My endeavour when I was coding that class was to get a conveniently versatile tool for constant use.