Switch to BigInteger if necessary - java

I am reading a text file which contains numbers in the range [1, 10^100]. I am then performing a sequence of arithmetic operations on each number. I would like to use a BigInteger only if the number is out of the int/long range. One approach would be to count how many digits there are in the string and switch to BigInteger if there are too many. Otherwise I'd just use primitive arithmetic as it is faster. Is there a better way?
Is there any reason why Java could not do this automatically i.e. switch to BigInteger if an int was too small? This way we would not have to worry about overflows.

I suspect the decision to use primitive values for integers and reals (done for performance reasons) made that option not possible. Note that Python and Ruby both do what you ask.
In this case it may be more work to handle the smaller special case than it is worth (you need some custom class to handle the two cases), and you should just use BigInteger.

Is there any reason why Java could not do this automatically i.e. switch to BigInteger if an int was too small?
Because that is higher-level behavior than what the Java language currently provides. The language is not even aware of the BigInteger class and what it does (i.e. it's not in the JLS). It's only aware of Integer (among other things) for boxing and unboxing purposes.
Speaking of boxing/unboxing, an int is a primitive type; BigInteger is a reference type. You can't have a variable that can hold values of both types.

You could read the values into BigIntegers, and then convert them to longs if they're small enough.
private static final BigInteger LONG_MAX = BigInteger.valueOf(Long.MAX_VALUE);

private static List<BigInteger> readAndProcess(BufferedReader rd) throws IOException {
    List<BigInteger> result = new ArrayList<BigInteger>();
    for (String line; (line = rd.readLine()) != null; ) {
        BigInteger bignum = new BigInteger(line);
        if (bignum.compareTo(LONG_MAX) > 0) // doesn't fit in a long
            result.add(bignumCalculation(bignum));
        else
            result.add(BigInteger.valueOf(primitiveCalculation(bignum.longValue())));
    }
    return result;
}

private static BigInteger bignumCalculation(BigInteger value) {
    // perform the calculation
}

private static long primitiveCalculation(long value) {
    // perform the calculation
}
(You could make the return value a List<Number> and have it a mixed collection of BigInteger and Long objects, but that wouldn't look very nice and wouldn't improve performance by a lot.)
The performance may be better if a large amount of the numbers in the file are small enough to fit in a long (depending on the complexity of the calculation). There's still a risk of overflow depending on what you do in primitiveCalculation, and you've now duplicated the code, (at least) doubling the bug potential, so you'll have to decide whether the performance gain really is worth it.
If your code is anything like my example, though, you'd probably have more to gain by parallelizing the code so the calculations and the I/O aren't performed on the same thread - you'd have to do some pretty heavy calculations for an architecture like that to be CPU-bound.
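For illustration, here is a minimal sketch of that kind of split, assuming a simple ExecutorService-based pool; the class and method names (ParallelSketch, readAndProcessParallel) are placeholders, and bignumCalculation stands in for whatever work you actually do:
import java.io.BufferedReader;
import java.io.IOException;
import java.math.BigInteger;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

class ParallelSketch {
    static List<BigInteger> readAndProcessParallel(BufferedReader rd)
            throws IOException, InterruptedException, ExecutionException {
        ExecutorService pool = Executors.newFixedThreadPool(
                Runtime.getRuntime().availableProcessors());
        List<Future<BigInteger>> futures = new ArrayList<Future<BigInteger>>();
        for (String line; (line = rd.readLine()) != null; ) {
            final String current = line;
            // the I/O thread only submits work; the pool does the arithmetic
            futures.add(pool.submit(() -> bignumCalculation(new BigInteger(current))));
        }
        List<BigInteger> result = new ArrayList<BigInteger>();
        for (Future<BigInteger> f : futures)
            result.add(f.get());
        pool.shutdown();
        return result;
    }

    // placeholder for the real calculation
    static BigInteger bignumCalculation(BigInteger value) {
        return value;
    }
}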

The impact of using BigDecimals when something smaller will suffice is surprisingly, err, big: Running the following code
public static class MyLong {
    private long l;
    public MyLong(long l) { this.l = l; }
    public void add(MyLong l2) { l += l2.l; }
}

public static void main(String[] args) throws Exception {
    // generate lots of random numbers
    long ls[] = new long[100000];
    BigDecimal bds[] = new BigDecimal[100000];
    MyLong mls[] = new MyLong[100000];
    Random r = new Random();
    for (int i=0; i<ls.length; i++) {
        long n = r.nextLong();
        ls[i] = n;
        bds[i] = new BigDecimal(n);
        mls[i] = new MyLong(n);
    }
    // time longs, then BigDecimals, then MyLongs
    long t0 = System.currentTimeMillis();
    for (int j=0; j<1000; j++) for (int i=0; i<ls.length-1; i++) {
        ls[i] += ls[i+1];
    }
    long t1 = Math.max(t0 + 1, System.currentTimeMillis());
    for (int j=0; j<1000; j++) for (int i=0; i<ls.length-1; i++) {
        bds[i].add(bds[i+1]); // add returns a new BigDecimal; the result is discarded, we only time the operation
    }
    long t2 = System.currentTimeMillis();
    for (int j=0; j<1000; j++) for (int i=0; i<ls.length-1; i++) {
        mls[i].add(mls[i+1]);
    }
    long t3 = System.currentTimeMillis();
    // compare times
    t3 -= t2;
    t2 -= t1;
    t1 -= t0;
    DecimalFormat df = new DecimalFormat("0.00");
    System.err.println("long: " + t1 + "ms, bigd: " + t2 + "ms, x"
        + df.format(t2*1.0/t1) + " more, mylong: " + t3 + "ms, x"
        + df.format(t3*1.0/t1) + " more");
}
produces, on my system, this output:
long: 375ms, bigd: 6296ms, x16.79 more, mylong: 516ms, x1.38 more
The MyLong class is there only to look at the effects of boxing, to compare against what you would get with a custom BigOrLong class.

Java is fast--really, really fast. It's only 2-4x slower than C, and sometimes as fast or a tad faster, where most other languages are 10x (Python) to 100x (Ruby) slower than C/Java. (Fortran is also hella-fast, by the way.)
Part of this is because it doesn't do things like switch number types for you. It could, but currently it can inline an operation like "a*5" in just a few bytes; imagine the hoops it would have to go through if a were an object. It would at least require a dynamic call to a's multiply method, which would be a few hundred or thousand times slower than when a was simply an integer value.
Java probably could, these days, actually use JIT compiling to optimize the call better and inline it at runtime, but even then very few library calls support BigInteger/BigDecimal, so a LOT of new library support would be needed; it would be a completely new language.
Also imagine how switching from int to BigInteger instead of long would make debugging video games crazy-hard! (Yeah, every time we move to the right side of the screen the game slows down by 50x, the code is all the same! How is this possible?!??)

Would it have been possible? Yes. But there are many problems with it.
Consider, for instance, that Java stores references to BigInteger, which is actually allocated on the heap, but stores int values directly. The difference can be made clear in C:
int i;
BigInt* bi;
Now, to automatically go from a literal to a reference, one would necessarily have to annotate the literal somehow. For instance, if the highest bit of the int was set, then the other bits could be used as a table lookup of some sort to retrieve the proper reference. That also means you'd get a BigInt** bi whenever it overflowed into that.
Of course, that's the bit usually used for sign, and hardware instructions pretty much depend on it. Worse still, if we do that, then the hardware won't be able to detect overflow and set the flags to indicate it. As a result, each operation would have to be accompanied by some test to see if an overflow has happened or will happen (depending on when it can be detected).
All that would add a lot of overhead to basic integer arithmetic, which would in practice negate any benefits you had to begin with. In other words, it is faster to assume BigInt than it is to try to use int and detect overflow conditions while at the same time juggling with the reference/literal problem.
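To make that cost concrete: the per-operation test is essentially what Java 8's Math.addExact/multiplyExact do (they throw ArithmeticException on overflow), so a hand-rolled long-or-BigInteger scheme would look roughly like the sketch below. The add helper is illustrative only, not an existing API:
import java.math.BigInteger;

class CheckedAdd {
    // falls back to BigInteger only when the primitive addition overflows
    static Number add(long a, long b) {
        try {
            return Math.addExact(a, b);   // throws ArithmeticException on overflow
        } catch (ArithmeticException overflow) {
            return BigInteger.valueOf(a).add(BigInteger.valueOf(b));
        }
    }
}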
So, to get any real advantage, one would have to use more space to represent ints. So instead of storing 32 bits in the stack, in the objects, or anywhere else we use them, we store 64 bits, for example, and use the additional 32 bits to control whether we want a reference or a literal. That could work, but there's an obvious problem with it -- space usage. :-) We might see more of it with 64 bits hardware, though.
Now, you might ask why not just 40 bits (32 bits + 1 byte) instead of 64? Basically, on modern hardware it is preferable to store stuff in 32 bits increments for performance reasons, so we'll be padding 40 bits to 64 bits anyway.
EDIT
Let's consider how one could go about doing this in C#. Now, I have no programming experience with C#, so I can't write the code to do it, but I expect I can give an overview.
The idea is to create a struct for it. It should look roughly like this:
public struct MixedInt
{
    private int i;
    private System.Numerics.BigInteger bi;

    public MixedInt(string s)
    {
        bi = System.Numerics.BigInteger.Parse(s);
        i = 0;
        if (bi <= int.MaxValue && bi >= int.MinValue)
        {
            i = (int)bi;
            bi = 0;
        }
    }

    // Define all required operations
}
So, if the number is in the integer range we use int, otherwise we use BigInteger. The operations have to ensure transition from one to another as required/possible. From the client point of view, this is transparent. It's just one type MixedInt, and the class takes care of using whatever fits better.
Note, however, that this kind of optimization may well be part of C#'s BigInteger already, given its implementation as a struct.
If Java had something like C#'s struct, we could do something like this in Java as well.
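For comparison, here is a rough Java sketch of the same idea. Java has no user-defined value types, so this is an ordinary class, and MixedLong and its methods are hypothetical names chosen for this example:
import java.math.BigInteger;

final class MixedLong {
    private final long small;      // used when the value fits in a long
    private final BigInteger big;  // non-null only when the value does not fit

    private MixedLong(long small, BigInteger big) {
        this.small = small;
        this.big = big;
    }

    static MixedLong parse(String s) {
        BigInteger parsed = new BigInteger(s);
        return parsed.bitLength() <= 63
                ? new MixedLong(parsed.longValueExact(), null)
                : new MixedLong(0L, parsed);
    }

    BigInteger toBigInteger() {
        return big != null ? big : BigInteger.valueOf(small);
    }

    // arithmetic operations would check 'big' and pick the fast or slow path
}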

Is there any reason why Java could not
do this automatically i.e. switch to
BigInteger if an int was too small?
This is one of the advantages of dynamic typing, but Java is statically typed, which prevents this.
In a dynamically typed language, when summing two Integers would produce an overflow, the system is free to return, say, a Long. Because dynamically typed languages rely on duck typing, that's fine. The same cannot happen in a statically typed language; it would break the type system.
EDIT
Given that my answer and comment were not clear, here I try to provide more details on why I think that static typing is the main issue:
1) The very fact that we speak of primitive types is a static typing issue; we wouldn't care in a dynamically typed language.
2) With primitive types, the result of the overflow cannot be converted to a type other than int, because that would not be correct w.r.t. static typing:
int i = Integer.MAX_VALUE + 1; // -2147483648
3) With reference types, it's the same except that we have autoboxing. Still, the addition could not return, say, a BigInteger, because that would not match the static type system (a BigInteger cannot be cast to Integer).
Integer j = new Integer( Integer.MAX_VALUE ) + 1; // -2147483648
4) What could be done is to subclass, say, Number and implement a type UnboundedNumeric that optimizes the representation internally (representation independence).
UnboundedNumeric k = new UnboundedNumeric( Integer.MAX_VALUE ).add( 1 ); // 2147483648
Still, it's not really the answer to the original question.
5) with dynamic typing, something like
var d = new Integer( Integer.MAX_VALUE ) + 1; // 2147483648
would return a Long which is ok.

Related

How can one scale an Int64 value without access to a larger type?

I refactored the code in this class to be in a form that is more friendly to my use cases; one issue that I noticed during testing is that I cannot convert this particular equation to use long inputs, because the expressions assigned to the a and m variables overflow on the multiplication/subtraction steps. Everything is just peachy when using int inputs because they can be cast to a long to prevent overflow. Is there anything one can do to get the proper behavior when the inputs are long?*
public static Func<int, int> Scale(int inputX, int inputY, int outputX, int outputY) {
    if (inputX > inputY) {
        var z = inputX;
        inputX = inputY;
        inputY = z;
    }
    if (outputX > outputY) {
        var z = outputX;
        outputX = outputY;
        outputY = z;
    }
    var a = (((((double)inputX) * (outputX - outputY)) / ((long)inputY - inputX)) + outputX);
    var m = (((double)(outputY - outputX)) / ((long)inputY - inputX));
    return (value) => ((int)((value * m) + a));
}
For example, if I replaced every instance of int in the function above with long then result will have an incorrect value in the following code:
Func<long, long> scaler = Scale(long.MinValue, long.MaxValue, -5, 5);
var result = scaler(long.MaxValue - 3);
The expected result is 4 but the actual result of -9223372036854775808 is not only wrong, it ends up being way outside the defined range of [-5, 5].
*Other than straight up using BigInt or implementing 64-bit multiplication and division in software; I am already implementing these operations as a workaround and am looking for alternative solutions that I haven't yet come across.
If you have to do math and none of the built-in .NET integer types is long/large enough, BigInteger is the droid you are looking for.
As long as you do not run out of memory, it can take a number of arbitrary size. Do however note that the performance is very bad, as should be expected. After all, arbitrary size does not come without tradeoffs. And unlike any other numeric type, it can grow big.
Looking at the source code, it seems to be little more than a List<uint> with an int for the sign. Unfortunately that design makes it vulnerable to fragmentation-based OutOfMemory exceptions and related List<uint> growth issues. I was hoping it would use a linked list internally, but no such luck.
Java has an equivalent type (I assume all higher-level languages do), but I have no data on its workings.
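In Java, the equivalent type is java.math.BigInteger. As a hedged sketch of how it sidesteps the overflow in the question (only the intermediate product is arbitrary-precision; the class and method names here are illustrative):
import java.math.BigInteger;
import java.util.function.LongUnaryOperator;

class LongScaler {
    static LongUnaryOperator scale(long inX, long inY, long outX, long outY) {
        final BigInteger inRange = BigInteger.valueOf(inY).subtract(BigInteger.valueOf(inX));
        final BigInteger outRange = BigInteger.valueOf(outY).subtract(BigInteger.valueOf(outX));
        // result = outX + (value - inX) * outRange / inRange, computed without overflow
        return value -> BigInteger.valueOf(value)
                .subtract(BigInteger.valueOf(inX))
                .multiply(outRange)
                .divide(inRange)
                .add(BigInteger.valueOf(outX))
                .longValueExact();
    }

    public static void main(String[] args) {
        LongUnaryOperator scaler = scale(Long.MIN_VALUE, Long.MAX_VALUE, -5, 5);
        System.out.println(scaler.applyAsLong(Long.MAX_VALUE - 3)); // prints 4
    }
}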

Bitwise operation in java or c

I would like to drastically improve the time performance of an operation I would best describe as a bit wise operation.
The following is a constructor for a BitFile class, taking three BitFile as parameters. Whichever bit the first and second parameter (firstContender and secondContender) agree on is taken from firstContender into the BitFile being constructed. Whichever bit they don't agree on is taken from the supportContender.
data is the class-field storing the result and the backbone of the BitFile class.
compare(byte,byte) returns true if both bytes are identical in value.
add(byte,int) takes a byte representing a bit and the index within the bit to extract, a second class-field "index" is used and incremented in add(byte,int) to put the next bit in location.
BitFile.get(int) returns a byte with just one specific bit set, if that bit is one; for example, BitFile.get(9) would return a byte with value 2 if the second bit of the second byte is a one, and 0 otherwise.
The XOR bitwise operation can quickly tell me which bits are different in the two BitFiles. Is there any quick way to use the result of an XOR, where each of its zero bits is replaced by the firstContender's equivalent bit and each of its one bits is replaced by the supportContender's equivalent bit, something like a three-operand bitwise operator?
public BitFile(
        BitFile firstContender, BitFile secondContender, BitFile supportContender)
{
    if(firstContender.getLength() != secondContender.getLength())
    {
        throw new IllegalArgumentException(
            "Error.\n"+
            "In BitFile constructor.\n"+
            "Two BitFiles must have identical lengths.");
    }
    BitFile randomSet = supportContender;
    int length = firstContender.getLength();
    data = new byte[length];
    for(int i = 0; i < length*8; i++)
    {
        if(compare(firstContender.get(i), secondContender.get(i)))
        {
            add(firstContender.get(i), i%8);
        }
        else
        {
            add(randomSet.get(i), i%8);
        }
    }
}
I found this question fairly confusing, but I think what you're computing is like this:
merge(first, second, support) = if first == second then first else support
So just choose where the bit comes from depending on whether the first and second sources agree or not.
something like a three-operand bitwise operator?
Indeed, something like that. But of course we need to implement it manually in terms of operations supported by Java. There are two common patterns in bitwise arithmetic for choosing between two sources based on a third:
1) (a & ~m) | (b & m)
2) a ^ ((a ^ b) & m)
Both choose, for each bit, the bit from a where m is zero, and from b where m is one. Pattern 1 is easier to understand, so I'll use it, but it's simple to adapt the code to the second pattern.
As you predicted, the mask in this case will be first ^ second, so:
for (int i = 0; i < data.length; i++) {
    int m = first.data[i] ^ second.data[i];
    data[i] = (byte)((first.data[i] & ~m) | (support.data[i] & m));
}
The same thing could easily be done with an array of int or long which would need fewer operations to process the same amount of data.
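For example, a sketch of the same merge over long words, processing 8 bytes per iteration (the method and array names are illustrative):
static void merge(long[] first, long[] second, long[] support, long[] dest) {
    for (int i = 0; i < dest.length; i++) {
        long m = first[i] ^ second[i];                 // 1 wherever the two sources disagree
        dest[i] = (first[i] & ~m) | (support[i] & m);  // take the support bits exactly there
    }
}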

Why doesn't Java have true multidimensional arrays?

The TL;DR version, for those who don't want the background, is the following specific question:
Question
Why doesn't Java have an implementation of true multidimensional arrays? Is there a solid technical reason? What am I missing here?
Background
Java has multidimensional arrays at the syntax level, in that one can declare
int[][] arr = new int[10][10];
but it seems that this is really not what one might have expected. Rather than having the JVM allocate a contiguous block of RAM big enough to store 100 ints, it comes out as an array of arrays of ints: so each layer is a contiguous block of RAM, but the thing as a whole is not. Accessing arr[i][j] is thus rather slow: the JVM has to
find the int[] stored at arr[i];
index this to find the int stored at arr[i][j].
This involves querying an object to go from one layer to the next, which is rather expensive.
Why Java does this
At one level, it's not hard to see why this can't be optimised to a simple scale-and-add lookup even if it were all allocated in one fixed block. The problem is that arr[3] is a reference all of its own, and it can be changed. So although arrays are of fixed size, we could easily write
arr[3] = new int[11];
and now the scale-and-add is screwed because this layer has grown. You'd need to know at runtime whether everything is still the same size as it used to be. In addition, of course, this will then get allocated somewhere else in RAM (it'll have to be, since it's bigger than what it's replacing), so it's not even in the right place for scale-and-add.
What's problematic about it
It seems to me that this is not ideal, and that for two reasons.
For one, it's slow. A test I ran with these methods for summing the contents of a single dimensional or multidimensional array took nearly twice as long (714 seconds vs 371 seconds) for the multidimensional case (an int[1000000] and an int[100][100][100] respectively, filled with random int values, run 1000000 times with warm cache).
public static long sumSingle(int[] arr) {
    long total = 0;
    for (int i=0; i<arr.length; i++)
        total += arr[i];
    return total;
}

public static long sumMulti(int[][][] arr) {
    long total = 0;
    for (int i=0; i<arr.length; i++)
        for (int j=0; j<arr[0].length; j++)
            for (int k=0; k<arr[0][0].length; k++)
                total += arr[i][j][k];
    return total;
}
Secondly, because it's slow, it thereby encourages obscure coding. If you encounter something performance-critical that would be naturally done with a multidimensional array, you have an incentive to write it as a flat array, even if that makes the code unnatural and hard to read. You're left with an unpalatable choice: obscure code or slow code.
What could be done about it
It seems to me that the basic problem could easily enough be fixed. The only reason, as we saw earlier, that it can't be optimised is that the structure might change. But Java already has a mechanism for making references unchangeable: declare them as final.
Now, just declaring it with
final int[][] arr = new int[10][10];
isn't good enough because it's only arr that is final here: arr[3] still isn't, and could be changed, so the structure might still change. But if we had a way of declaring things so that it was final throughout, except at the bottom layer where the int values are stored, then we'd have an entire immutable structure, and it could all be allocated as one block, and indexed with scale-and-add.
How it would look syntactically, I'm not sure (I'm not a language designer). Maybe
final int[final][] arr = new int[10][10];
although admittedly that looks a bit weird. This would mean: final at the top layer; final at the next layer; not final at the bottom layer (else the int values themselves would be immutable).
Finality throughout would enable the JIT compiler to optimise this to give performance comparable to that of a single-dimensional array, which would then take away the temptation to code that way just to get around the slowness of multidimensional arrays.
(I hear a rumour that C# does something like this, although I also hear another rumour that the CLR implementation is so bad that it's not worth having... perhaps they're just rumours...)
Question
So why doesn't Java have an implementation of true multidimensional arrays? Is there a solid technical reason? What am I missing here?
Update
A bizarre side note: the difference in timings drops away to only a few percent if you use an int for the running total rather than a long. Why would there be such a small difference with an int, and such a big difference with a long?
Benchmarking code
Code I used for benchmarking, in case anyone wants to try to reproduce these results:
import java.util.Random;

public class Multidimensional {

    public static long sumSingle(final int[] arr) {
        long total = 0;
        for (int i=0; i<arr.length; i++)
            total += arr[i];
        return total;
    }

    public static long sumMulti(final int[][][] arr) {
        long total = 0;
        for (int i=0; i<arr.length; i++)
            for (int j=0; j<arr[0].length; j++)
                for (int k=0; k<arr[0][0].length; k++)
                    total += arr[i][j][k];
        return total;
    }

    public static void main(String[] args) {
        final int iterations = 1000000;
        Random r = new Random();

        int[] arr = new int[1000000];
        for (int i=0; i<arr.length; i++)
            arr[i] = r.nextInt();
        long total = 0;
        System.out.println(sumSingle(arr));
        long time = System.nanoTime();
        for (int i=0; i<iterations; i++)
            total = sumSingle(arr);
        time = System.nanoTime() - time;
        System.out.printf("Took %d ms for single dimension\n", time/1000000, total);

        int[][][] arrMulti = new int[100][100][100];
        for (int i=0; i<arrMulti.length; i++)
            for (int j=0; j<arrMulti[i].length; j++)
                for (int k=0; k<arrMulti[i][j].length; k++)
                    arrMulti[i][j][k] = r.nextInt();
        System.out.println(sumMulti(arrMulti));
        time = System.nanoTime();
        for (int i=0; i<iterations; i++)
            total = sumMulti(arrMulti);
        time = System.nanoTime() - time;
        System.out.printf("Took %d ms for multi dimension\n", time/1000000, total);
    }
}
but it seems that this is really not what one might have expected.
Why?
Consider that the form T[] means "array of type T", then just as we would expect int[] to mean "array of type int", we would expect int[][] to mean "array of type array of type int", because there's no less reason for having int[] as the T than int.
As such, considering that one can have arrays of any type, it follows just from the way [ and ] are used in declaring and initialising arrays (and for that matter, {, } and ,), that without some sort of special rule banning arrays of arrays, we get this sort of use "for free".
Now consider also that there are things we can do with jagged arrays that we can't do otherwise:
We can have "jagged" arrays where different inner arrays are of different sizes.
We can have null entries within the outer array where that is an appropriate mapping of the data, or perhaps to allow lazy building.
We can deliberately alias within the array so e.g. lookup[1] is the same array as lookup[5]. (This can allow for massive savings with some data-sets, e.g. many Unicode properties can be mapped for the full set of 1,112,064 code points in a small amount of memory because leaf arrays of properties can be repeated for ranges with matching patterns).
Some heap implementations can handle the many smaller objects better than one large object in memory.
There are certainly cases where these sort of multi-dimensional arrays are useful.
Now, the default state of any feature is unspecified and unimplemented. Someone needs to decide to specify and implement a feature, or else it wouldn't exist.
As shown above, the array-of-arrays sort of multidimensional array will exist unless someone decides to introduce a special feature banning arrays of arrays. Since arrays of arrays are useful for the reasons above, that would be a strange decision to make.
Conversely, the sort of multidimensional array where an array has a defined rank that can be greater than 1 and so be used with a set of indices rather than a single index, does not follow naturally from what is already defined. Someone would need to:
Decide on the specification for how the declaration, initialisation and use would work.
Document it.
Write the actual code to do this.
Test the code to do this.
Handle the bugs, edge-cases, reports of bugs that aren't actually bugs, backward-compatibility issues caused by fixing the bugs.
Also users would have to learn this new feature.
So, it has to be worth it. Some things that would make it worth it would be:
If there was no way of doing the same thing.
If the way of doing the same thing was strange or not well-known.
People would expect it from similar contexts.
Users can't provide similar functionality themselves.
In this case though:
But there is a way of doing the same thing.
Using strides within flat arrays was already well known to C and C++ programmers, and Java built on that syntax, so the same techniques are directly applicable.
Java's syntax was based on C++, and C++ similarly only has direct support for multidimensional arrays as arrays-of-arrays. (Except when statically allocated, but that's not something that would have an analogy in Java where arrays are objects).
One can easily write a class that wraps an array and details of stride-sizes and allows access via a set of indices.
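For instance, a minimal sketch of such a wrapper (Int3D is a made-up name; the indexing is the usual scale-and-add over a flat backing array):
final class Int3D {
    private final int[] data;
    private final int dim2, dim3;  // the inner dimensions determine the strides

    Int3D(int d1, int d2, int d3) {
        this.data = new int[d1 * d2 * d3];
        this.dim2 = d2;
        this.dim3 = d3;
    }

    int get(int i, int j, int k) {
        return data[(i * dim2 + j) * dim3 + k];
    }

    void set(int i, int j, int k, int value) {
        data[(i * dim2 + j) * dim3 + k] = value;
    }
}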
Really, the question is not "why doesn't Java have true multidimensional arrays"? But "Why should it?"
Of course, the points you made in favour of multidimensional arrays are valid, and some languages do have them for that reason, but the burden is nonetheless to argue a feature in, not argue it out.
(I hear a rumour that C# does something like this, although I also hear another rumour that the CLR implementation is so bad that it's not worth having... perhaps they're just rumours...)
Like many rumours, there's an element of truth here, but it is not the full truth.
.NET arrays can indeed have multiple ranks. This is not the only way in which it is more flexible than Java. Each rank can also have a lower-bound other than zero. As such, you could for example have an array that goes from -3 to 42 or a two dimensional array where one rank goes from -2 to 5 and another from 57 to 100, or whatever.
C# does not give complete access to all of this from its built-in syntax (you need to call Array.CreateInstance() for lower bounds other than zero), but it does allow you to use the syntax int[,] for a two-dimensional array of int, int[,,] for a three-dimensional array, and so on.
Now, the extra work involved in dealing with lower bounds other than zero adds a performance burden, and yet these cases are relatively uncommon. For that reason single-rank arrays with a lower-bound of 0 are treated as a special case with a more performant implementation. Indeed, they are internally a different sort of structure.
In .NET, multi-dimensional arrays with lower bounds of zero are treated as multi-dimensional arrays whose lower bounds just happen to be zero (that is, as instances of the slower case), rather than the faster case being extended to handle ranks greater than 1.
Of course, .NET could have had a fast-path case for zero-based multi-dimensional arrays, but then all the reasons for Java not having them apply and the fact that there's already one special case, and special cases suck, and then there would be two special cases and they would suck more. (As it is, one can have some issues with trying to assign a value of one type to a variable of the other type).
Not a single thing above shows clearly that Java couldn't possibly have had the sort of multi-dimensional array you talk of; it would have been a sensible enough decision, but the decision that was actually made was also sensible.
This should be a question to James Gosling, I suppose. The initial design of Java was about OOP and simplicity, not about speed.
If you have a better idea of how multidimensional arrays should work, there are several ways of bringing it to life:
Submit a JDK Enhancement Proposal.
Develop a new JSR through Java Community Process.
Propose a new Project.
UPD. Of course, you are not the first to question the problems of Java arrays design.
For instance, projects Sumatra and Panama would also benefit from true multidimensional arrays.
"Arrays 2.0" is John Rose's talk on this subject at JVM Language Summit 2012.
To me it looks like you sort of answered the question yourself:
... an incentive to write it as a flat array, even if that makes the code unnatural and hard to read.
So write it as a flat array which is easy to read. With a trivial helper like
double get(int row, int col) {
    return data[rowLength * row + col];
}
and a similar setter and possibly a +=-equivalent, you can pretend you're working with a 2D array. It's really no big deal. You can't use the array notation and everything gets verbose and ugly, but that seems to be the Java way. It's exactly the same as with BigInteger or BigDecimal. You can't use brackets for accessing a Map, which is a very similar case.
Now the question is how important all those features are. Would more people be happy if they could write x += BigDecimal.valueOf("123456.654321") + 10;, or spouse["Paul"] = "Mary";, or use 2D arrays without the boilerplate, or what? All of this would be nice, and you could go further, e.g., array slices. But there's no real problem; you have to choose between verbosity and inefficiency as in many other cases. IMHO, the effort spent on this feature could be better spent elsewhere. Your 2D arrays would be a new beast, as
Java actually has no 2D primitive arrays;
it's mostly syntactic sugar, and the underlying thing is an array of objects.
double[][] a = new double[1][1];
Object[] b = a;
As arrays are reified, the current implementation needs hardly any support. Your implementation would open a can of worms:
There are currently 8 primitive types, which means 9 array types, would a 2D array be the tenth? What about 3D?
There is a single special object header type for arrays. A 2D array could need another one.
What about java.lang.reflect.Array? Clone it for 2D arrays?
Many other features would have to be adapted, e.g. serialization.
And what would
??? x = {new int[1], new int[2]};
be? An old-style 2D int[][]? What about interoperability?
I guess, it's all doable, but there are simpler and more important things missing from Java. Some people need 2D arrays all the time, but many can hardly remember when they used any array at all.
I am unable to reproduce the performance benefits you claim. Specifically, the test program:
import java.math.BigDecimal;
import java.util.Random;

public abstract class Benchmark {
final String name;
public Benchmark(String name) {
this.name = name;
}
abstract int run(int iterations) throws Throwable;
private BigDecimal time() {
try {
int nextI = 1;
int i;
long duration;
do {
i = nextI;
long start = System.nanoTime();
run(i);
duration = System.nanoTime() - start;
nextI = (i << 1) | 1;
} while (duration < 1000000000 && nextI > 0);
return new BigDecimal((duration) * 1000 / i).movePointLeft(3);
} catch (Throwable e) {
throw new RuntimeException(e);
}
}
@Override
public String toString() {
return name + "\t" + time() + " ns";
}
public static void main(String[] args) throws Exception {
final int[] flat = new int[100*100*100];
final int[][][] multi = new int[100][100][100];
Random chaos = new Random();
for (int i = 0; i < flat.length; i++) {
flat[i] = chaos.nextInt();
}
for (int i=0; i<multi.length; i++)
for (int j=0; j<multi[0].length; j++)
for (int k=0; k<multi[0][0].length; k++)
multi[i][j][k] = chaos.nextInt();
Benchmark[] marks = {
new Benchmark("flat") {
@Override
int run(int iterations) throws Throwable {
long total = 0;
for (int j = 0; j < iterations; j++)
for (int i = 0; i < flat.length; i++)
total += flat[i];
return (int) total;
}
},
new Benchmark("multi") {
@Override
int run(int iterations) throws Throwable {
long total = 0;
for (int iter = 0; iter < iterations; iter++)
for (int i=0; i<multi.length; i++)
for (int j=0; j<multi[0].length; j++)
for (int k=0; k<multi[0][0].length; k++)
total+=multi[i][j][k];
return (int) total;
}
},
new Benchmark("multi (idiomatic)") {
@Override
int run(int iterations) throws Throwable {
long total = 0;
for (int iter = 0; iter < iterations; iter++)
for (int[][] a : multi)
for (int[] b : a)
for (int c : b)
total += c;
return (int) total;
}
}
};
for (Benchmark mark : marks) {
System.out.println(mark);
}
}
}
run on my workstation with
java version "1.8.0_05"
Java(TM) SE Runtime Environment (build 1.8.0_05-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.5-b02, mixed mode)
prints
flat 264360.217 ns
multi 270303.246 ns
multi (idiomatic) 266607.334 ns
That is, we observe a mere 3% difference between the one-dimensional and the multi-dimensional code you provided. This difference drops to 1% if we use idiomatic Java (specifically, an enhanced for loop) for traversal (probably because bounds checking is performed on the same array object the loop dereferences, enabling the just in time compiler to elide bounds checking more completely).
Performance therefore seems an inadequate justification for increasing the complexity of the language. Specifically, to support true multi dimensional arrays, the Java programming language would have to distinguish between arrays of arrays, and multidimensional arrays.
Likewise, programmers would have to distinguish between them, and be aware of their differences. API designers would have to ponder whether to use an array of arrays, or a multidimensional array. The compiler, class file format, class file verifier, interpreter, and just in time compiler would have to be extended. This would be particularly difficult, because multidimensional arrays of different dimension counts would have an incompatible memory layout (because the size of their dimensions must be stored to enable bounds checking), and can therefore not be subtypes of each other. As a consequence, the methods of class java.util.Arrays would likely have to be duplicated for each dimension count, as would all otherwise polymorphic algorithms working with arrays.
To conclude, extending Java to support multidimensional arrays would offer negligible performance gain for most programs, but require non-trivial extensions to its type system, compiler and runtime environment. Introducing them would therefore have been at odds with the design goals of the Java programming language, specifically that it be simple.
Since this question is to a great extent about performance, let me contribute a proper JMH-based benchmark. I have also changed some things to make your example both simpler and the performance edge more prominent.
In my case I compare a 1D array with a 2D-array, and use a very short inner dimension. This is the worst case for the cache.
I have tried with both long and int accumulator and saw no difference between them. I submit the version with int.
import static java.util.concurrent.TimeUnit.MILLISECONDS;

import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.*;

@OutputTimeUnit(TimeUnit.NANOSECONDS)
@BenchmarkMode(Mode.AverageTime)
@OperationsPerInvocation(Measure.X * Measure.Y)
@Warmup(iterations = 30, time = 100, timeUnit = MILLISECONDS)
@Measurement(iterations = 5, time = 1000, timeUnit = MILLISECONDS)
@State(Scope.Thread)
@Threads(1)
@Fork(1)
public class Measure
{
    static final int X = 100_000, Y = 10;
    private final int[] single = new int[X*Y];
    private final int[][] multi = new int[X][Y];

    @Setup public void setup() {
        final ThreadLocalRandom rnd = ThreadLocalRandom.current();
        for (int i=0; i<single.length; i++) single[i] = rnd.nextInt();
        for (int i=0; i<multi.length; i++)
            for (int j=0; j<multi[0].length; j++)
                multi[i][j] = rnd.nextInt();
    }

    @Benchmark public long sumSingle() { return sumSingle(single); }
    @Benchmark public long sumMulti() { return sumMulti(multi); }

    public static long sumSingle(int[] arr) {
        int total = 0;
        for (int i=0; i<arr.length; i++)
            total += arr[i];
        return total;
    }

    public static long sumMulti(int[][] arr) {
        int total = 0;
        for (int i=0; i<arr.length; i++)
            for (int j=0; j<arr[0].length; j++)
                total += arr[i][j];
        return total;
    }
}
The difference in performance is larger than what you have measured:
Benchmark Mode Samples Score Score error Units
o.s.Measure.sumMulti avgt 5 1,356 0,121 ns/op
o.s.Measure.sumSingle avgt 5 0,421 0,018 ns/op
That's a factor above three. (Note that the timing is reported per array element.)
I also note that there is no warmup involved: the first 100 ms are as fast as the rest. Apparently this is such a simple task that the interpreter already does all it takes to make it optimal.
Update
Changing sumMulti's inner loop to
for (int j=0; j<arr[i].length; j++)
total+=arr[i][j];
(note arr[i].length) resulted in a significant speedup, as predicted by maaartinus. Using arr[0].length makes it impossible to eliminate the index range check. Now the results are as follows:
Benchmark Mode Samples Score Error Units
o.s.Measure.sumMulti avgt 5 0,992 ± 0,066 ns/op
o.s.Measure.sumSingle avgt 5 0,424 ± 0,046 ns/op
If you want a fast implementation of a true multi-dimensional array, you could write a custom implementation like this. But you are right... it is not as crisp as the array notation. Although, a neat implementation could be quite friendly.
public class MyArray {
    private int rows = 0;
    private int cols = 0;
    String[] backingArray = null;

    public MyArray(int rows, int cols) {
        this.rows = rows;
        this.cols = cols;
        backingArray = new String[rows*cols];
    }

    public String get(int row, int col) {
        return backingArray[row*cols + col];
    }

    // ... setters and other stuff
}
Why is it not the default implementation?
The designers of Java probably had to decide how the default notation of the usual C array syntax would behave. They had a single array notation which could either implement arrays-of-arrays or true multi-dimensional arrays.
I think early Java designers were really concerned with Java being safe. A lot of decisions seem to have been taken to make it difficult for the average programmer (or a good programmer on a bad day) to mess something up. With true multi-dimensional arrays, it is easier for users to waste large chunks of memory by allocating blocks where they are not useful.
Also, from Java's embedded-systems roots, they probably found that it was easier to find small pieces of memory to allocate than the large contiguous chunks required for true multi-dimensional objects.
Of course, the flip side is that places where multi-dimensional arrays really make sense suffer. And you are forced to use a library and messy looking code to get your work done.
Why is it still not included in the language?
Even today, true multi-dimensional arrays are a risk from the point of view of possible memory wastage/misuse.

Is the performance/memory benefit of short nullified by downcasting?

I'm writing a large scale application where I'm trying to conserve as much memory as possible as well as boost performance. As such, when I have a field that I know is only going to have values from 0 - 10 or from -100 - 100, I try to use the short data type instead of int.
What this means for the rest of the code, though, is that all over the place when I call these functions, I have to downcast simple ints into shorts. For example:
Method Signature
public void coordinates(short x, short y) ...
Method Call
obj.coordinates((short) 1, (short) 2);
It's like that all throughout my code because the literals are treated as ints and aren't being automatically downcast or typed based on the function parameters.
As such, is any performance or memory gain actually significant once this downcasting occurs? Or is the conversion process so efficient that I can still pick up some gains?
There is no performance benefit of using short versus int on 32-bit platforms, in all but the case of short[] versus int[] - and even then the cons usually outweigh the pros.
Assuming you're running on either x64, x86 or ARM-32:
When in use, 16-bit SHORTs are stored in integer registers which are either 32-bit or 64-bits long, just the same as ints. I.e. when the short is in use, you gain no memory or performance benefit versus an int.
When on the stack, 16-bit SHORTs are stored in 32-bit or 64-bit "slots" in order to keep the stack aligned (just like ints). You gain no performance or memory benefit from using SHORTs versus INTs for local variables.
When being passed as parameters, SHORTs are auto-widened to 32-bit or 64-bit when they are pushed on the stack (unlike ints, which are just pushed). Your code here is actually slightly less performant and has a slightly bigger (code) memory footprint than if you used ints.
When storing global (static) variables, these variables are automatically expanded to take up 32-bit or 64-bit slots to guarantee alignment of pointers (references). This means you get no performance or memory benefit for using SHORTs versus INTs for global (static) variables.
When storing fields, these live in a structure in heap memory that maps to the layout of the class. In this class, fields are automatically padded to 32-bit or 64-bit to maintain the alignment of fields on the heap. You get no performance or memory benefit by using SHORTs for fields versus INTs.
The only benefit you'll ever see for using SHORTs versus INTs is in the case where you allocate an array of them. In this case, an array of N shorts is roughly half as long as an array of N ints.
Other than the cache benefit of having a hot loop's data packed closely together, in the case of complex but localized math within a large array of shorts, you'll never see a benefit for using SHORTs versus INTs.
In ALL other cases - such as shorts being used for fields, globals, parameters and locals, other than the number of bits that it can store, there is no difference between a SHORT and an INT.
My advice as always is to recommend that before making your code more difficult to read, and more artificially restricted, try BENCHMARKING your code to see where the memory and CPU bottlenecks are, and then tackle those.
I strongly suspect that if you ever come across the case where your app is suffering from use of ints rather than shorts, then you'll have long since ditched Java for a less memory/CPU hungry runtime anyway, so doing all of this work upfront is wasted effort.
As far as I can see, the casts per se should have no runtime costs (whether using short instead of int actually improves performance is debatable, and depends on the specifics of your application).
Consider the following:
public class Main {
    public static void f(short x, short y) {
    }

    public static void main(String args[]) {
        final short x = 1;
        final short y = 2;
        f(x, y);
        f((short)1, (short)2);
    }
}
The last two lines of main() compile to:
// f(x, y)
4: iconst_1
5: iconst_2
6: invokestatic #21 // Method f:(SS)V
// f((short)1, (short)2);
9: iconst_1
10: iconst_2
11: invokestatic #21 // Method f:(SS)V
As you can see, they are identical. The casts happen at compile time.
The type casting from int literal to short occurs at compile time and has no runtime performance impact.
You need a way to check the effect of your type choices on the memory use. If short vs. int in a given situation is going to gain performance through lower memory footprint, the effect on memory should be measurable.
Here is a simple method for measuring the amount of memory in use:
private static long inUseMemory() {
    Runtime rt = Runtime.getRuntime();
    rt.gc();
    final long memory = rt.totalMemory() - rt.freeMemory();
    return memory;
}
I am also including an example of a program using that method to check memory use in some common situations. The memory increase for allocating an array of a million shorts confirms that short arrays use two bytes per element. The memory increases for the various object arrays indicate that changing the type of one or two fields makes little difference.
Here is the output from one run. YMMV.
Before short[1000000] allocation: In use: 162608 Change 162608
After short[1000000] allocation: In use: 2162808 Change 2000200
After TwoShorts[1000000] allocation: In use: 34266200 Change 32103392
After NoShorts[1000000] allocation: In use: 58162560 Change 23896360
After TwoInts[1000000] allocation: In use: 90265920 Change 32103360
Dummy to keep arrays live -378899459
The rest of this article is the program source:
public class Test {
private static int BIG = 1000000;
private static long oldMemory = 0;
public static void main(String[] args) {
short[] megaShort;
NoShorts[] megaNoShorts;
TwoShorts[] megaTwoShorts;
TwoInts[] megaTwoInts;
System.out.println("Before short[" + BIG + "] allocation: "
+ memoryReport());
megaShort = new short[BIG];
System.out
.println("After short[" + BIG + "] allocation: " + memoryReport());
megaTwoShorts = new TwoShorts[BIG];
for (int i = 0; i < BIG; i++) {
megaTwoShorts[i] = new TwoShorts();
}
System.out.println("After TwoShorts[" + BIG + "] allocation: "
+ memoryReport());
megaNoShorts = new NoShorts[BIG];
for (int i = 0; i < BIG; i++) {
megaNoShorts[i] = new NoShorts();
}
System.out.println("After NoShorts[" + BIG + "] allocation: "
+ memoryReport());
megaTwoInts = new TwoInts[BIG];
for (int i = 0; i < BIG; i++) {
megaTwoInts[i] = new TwoInts();
}
System.out.println("After TwoInts[" + BIG + "] allocation: "
+ memoryReport());
System.out.println("Dummy to keep arrays live "
+ (megaShort[0] + megaTwoShorts[0].hashCode() + megaNoShorts[0]
.hashCode() + megaTwoInts[0].hashCode()));
}
private static long inUseMemory() {
Runtime rt = Runtime.getRuntime();
rt.gc();
final long memory = rt.totalMemory() - rt.freeMemory();
return memory;
}
private static String memoryReport() {
long newMemory = inUseMemory();
String result = "In use: " + newMemory + " Change "
+ (newMemory - oldMemory);
oldMemory = newMemory;
return result;
}
}
class NoShorts {
//char a, b, c;
}
class TwoShorts {
//char a, b, c;
short s, t;
}
class TwoInts {
//char a, b, c;
int s, t;
}
First I want to confirm the memory savings, as I saw some doubts raised. Per the documentation of short in the tutorial here: http://docs.oracle.com/javase/tutorial/java/nutsandbolts/datatypes.html
short: The short data type is a 16-bit signed two's complement integer. It has a minimum value of -32,768 and a maximum value of 32,767 (inclusive). As with byte, the same guidelines apply: you can use a short to save memory in large arrays, in situations where the memory savings actually matters.
By using short you do save memory in large arrays (hopefully that is your case), hence it's a good idea to use it.
Now to your question:
Is the performance/memory benefit of short nullified by downcasting?
The short answer is NO. Downcasting from int to short happens at compile time, hence there is no negative impact from a performance perspective; and since you are saving memory, it may result in better performance in memory-constrained scenarios.

How to add two java.lang.Numbers?

I have two Numbers. Eg:
Number a = 2;
Number b = 3;
//Following is an error:
Number c = a + b;
Why are arithmetic operations not supported on Numbers? Anyway, how would I add these two numbers in Java? (Of course I'm getting them from somewhere and I don't know if they are Integer or Float etc.)
You say you don't know if your numbers are integer or float... when you use the Number class, the compiler also doesn't know if your numbers are integers, floats or some other thing. As a result, the basic math operators like + and - don't work; the computer wouldn't know how to handle the values.
START EDIT
Based on the discussion, I thought an example might help. Computers store floating point numbers as two parts, a coefficient and an exponent. So, in a theoretical system, 001110 might be broken up as 0011 10, i.e. 3^2 = 9. But positive integers store numbers as binary, so 001110 could also mean 2 + 4 + 8 = 14. When you use the class Number, you're telling the computer you don't know if the number is a float or an int or what, so it knows it has 001110 but it doesn't know if that means 9 or 14 or some other value.
END EDIT
What you can do is make a little assumption and convert to one of the types to do the math. So you could have
Number c = a.intValue() + b.intValue();
which you might as well turn into
Integer c = a.intValue() + b.intValue();
if you're willing to suffer some rounding error, or
Float c = a.floatValue() + b.floatValue();
if you suspect that you're not dealing with integers and are okay with possible minor precision issues. Or, if you'd rather take a small performance blow instead of that error,
BigDecimal c = new BigDecimal(a.floatValue()).add(new BigDecimal(b.floatValue()));
It would also work to make a method to handle the adding for you. Now I do not know the performance impact this will cause but I assume it will be less than using BigDecimal.
public static Number addNumbers(Number a, Number b) {
    if (a instanceof Double || b instanceof Double) {
        return a.doubleValue() + b.doubleValue();
    } else if (a instanceof Float || b instanceof Float) {
        return a.floatValue() + b.floatValue();
    } else if (a instanceof Long || b instanceof Long) {
        return a.longValue() + b.longValue();
    } else {
        return a.intValue() + b.intValue();
    }
}
The only way to correctly add any two types of java.lang.Number is:
Number a = 2f; // Float
Number b = 3d; // Double
Number c = new BigDecimal( a.toString() ).add( new BigDecimal( b.toString() ) );
This works even for two arguments with a different number type. It will (should?) not produce any side effects like overflow or losing precision, as long as the toString() of the number type does not reduce precision.
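Wrapped in a small helper (a sketch; it assumes, as above, that toString() of the Number subtype yields a parseable decimal):
import java.math.BigDecimal;

class NumberMath {
    static BigDecimal add(Number a, Number b) {
        return new BigDecimal(a.toString()).add(new BigDecimal(b.toString()));
    }
}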
java.lang.Number is just the superclass of all wrapper classes of primitive types (see java doc). Use the appropriate primitive type (double, int, etc.) for your purpose, or the respective wrapper class (Double, Integer, etc.).
Consider this:
Number a = 1.5; // Actually Java creates a double and boxes it into a Double object
Number b = 1; // Same here for int -> Integer boxed
// What should the result be? If Number would do implicit casts,
// it would behave differently from what Java usually does.
Number c = a + b;
// Now that works, and you know at first glance what that code does.
// Nice explicit casts like you usually use in Java.
// The result is of course again a double that is boxed into a Double object
Number d = a.doubleValue() + (double)b.intValue();
Use the following:
Number c = a.intValue() + b.intValue(); // Number is an object and not a primitive data type.
Or:
int a = 2;
int b = 3;
int c = 2 + 3;
I think there are 2 sides to your question.
Why is operator+ not supported on Number?
Because the Java language spec does not specify this, and there is no operator overloading. There is also no natural compile-time way to cast a Number to some fundamental type, and no natural add to define for some types of operations.
Why are basic arithmic operations not supported on Number?
(Copied from my comment:)
Not all subclasses can implement this in a way you would expect. Especially with the atomic types it's hard to define a useful contract for e.g. add.
Also, a method add would be trouble if you try to add a Long to a Short.
If you know the Type of one number but not the other it is possible to do something like
public Double add(Double value, Number increment) {
    return value + Double.parseDouble(increment.toString());
}
But it can be messy, so be aware of potential loss of accuracy and NumberFormatExceptions
Number is an abstract class which you cannot make an instance of directly. Provided you have an instance of a concrete subclass, you can get number.longValue() or number.intValue() and add them.
First of all, you should be aware that Number is an abstract class. What happens here is that when you create your 2 and 3, they are interpreted as primitives and boxed into a subtype (an Integer in this case). Because an Integer is a subtype of Number, you can assign the newly created Integer into a Number reference.
However, a number is just an abstraction. It could be integer, it could be floating point, etc., so the semantics of math operations would be ambiguous.
Number does not provide the classic math operations for two reasons:
First, member methods in Java cannot be operators. It's not C++. At best, they could provide an add() method.
Second, figuring out what type of operation to do when you have two inputs (e.g., a division of a float by an int) is quite tricky.
So instead, it is your responsibility to make the conversion back to the specific primitive type you are interested in it and apply the mathematical operators.
The best answer would be to make a utility with double dispatch drilling down to the most common known types (take a look at Smalltalk's addition implementation).
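As an illustration of that double-dispatch idea, here is a sketch with made-up types; only two numeric kinds are shown, but more can be added the same way:
interface Num {
    Num plus(Num other);            // first dispatch: on the left operand
    Num plusInt(IntNum left);       // second dispatch: left is known to be an IntNum
    Num plusDouble(DoubleNum left); // second dispatch: left is known to be a DoubleNum
}

final class IntNum implements Num {
    final int value;
    IntNum(int value) { this.value = value; }
    public Num plus(Num other) { return other.plusInt(this); }
    public Num plusInt(IntNum left) { return new IntNum(left.value + value); }
    public Num plusDouble(DoubleNum left) { return new DoubleNum(left.value + value); }
}

final class DoubleNum implements Num {
    final double value;
    DoubleNum(double value) { this.value = value; }
    public Num plus(Num other) { return other.plusDouble(this); }
    public Num plusInt(IntNum left) { return new DoubleNum(left.value + value); }
    public Num plusDouble(DoubleNum left) { return new DoubleNum(left.value + value); }
}

// e.g. new IntNum(2).plus(new DoubleNum(3.5)) yields a DoubleNum holding 5.5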
