The TL;DR version, for those who don't want the background, is the following specific question:
Question
Why doesn't Java have an implementation of true multidimensional arrays? Is there a solid technical reason? What am I missing here?
Background
Java has multidimensional arrays at the syntax level, in that one can declare
int[][] arr = new int[10][10];
but it seems that this is really not what one might have expected. Rather than having the JVM allocate a contiguous block of RAM big enough to store 100 ints, it comes out as an array of arrays of ints: so each layer is a contiguous block of RAM, but the thing as a whole is not. Accessing arr[i][j] is thus rather slow: the JVM has to
find the int[] stored at arr[i];
index this to find the int stored at arr[i][j].
This involves querying an object to go from one layer to the next, which is rather expensive.
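In effect, every arr[i][j] access does the equivalent of this two-step lookup:
int[] row = arr[i]; // first load: fetch the reference to the inner array at index i
int v = row[j]; // second load: index into that inner array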
Why Java does this
At one level, it's not hard to see why this can't be optimised to a simple scale-and-add lookup even if it were all allocated in one fixed block. The problem is that arr[3] is a reference all of its own, and it can be changed. So although arrays are of fixed size, we could easily write
arr[3] = new int[11];
and now the scale-and-add is screwed because this layer has grown. You'd need to know at runtime whether everything is still the same size as it used to be. In addition, of course, this will then get allocated somewhere else in RAM (it'll have to be, since it's bigger than what it's replacing), so it's not even in the right place for scale-and-add.
What's problematic about it
It seems to me that this is not ideal, and that for two reasons.
For one, it's slow. A test I ran with these methods for summing the contents of a single dimensional or multidimensional array took nearly twice as long (714 seconds vs 371 seconds) for the multidimensional case (an int[1000000] and an int[100][100][100] respectively, filled with random int values, run 1000000 times with warm cache).
public static long sumSingle(int[] arr) {
long total = 0;
for (int i=0; i<arr.length; i++)
total+=arr[i];
return total;
}
public static long sumMulti(int[][][] arr) {
long total = 0;
for (int i=0; i<arr.length; i++)
for (int j=0; j<arr[0].length; j++)
for (int k=0; k<arr[0][0].length; k++)
total+=arr[i][j][k];
return total;
}
Secondly, because it's slow, it thereby encourages obscure coding. If you encounter something performance-critical that would naturally be done with a multidimensional array, you have an incentive to write it as a flat array instead, even if that makes the code unnatural and hard to read. You're left with an unpalatable choice: obscure code or slow code.
What could be done about it
It seems to me that the basic problem could easily enough be fixed. The only reason, as we saw earlier, that it can't be optimised is that the structure might change. But Java already has a mechanism for making references unchangeable: declare them as final.
Now, just declaring it with
final int[][] arr = new int[10][10];
isn't good enough because it's only arr that is final here: arr[3] still isn't, and could be changed, so the structure might still change. But if we had a way of declaring things so that it was final throughout, except at the bottom layer where the int values are stored, then we'd have an entire immutable structure, and it could all be allocated as one block, and indexed with scale-and-add.
How it would look syntactically, I'm not sure (I'm not a language designer). Maybe
final int[final][] arr = new int[10][10];
although admittedly that looks a bit weird. This would mean: final at the top layer; final at the next layer; not final at the bottom layer (else the int values themselves would be immutable).
Finality throughout would enable the JIT compiler to optimise this to give performance equal to that of a single dimensional array, which would then take away the temptation to code that way just to get round the slowness of multidimensional arrays.
(I hear a rumour that C# does something like this, although I also hear another rumour that the CLR implementation is so bad that it's not worth having... perhaps they're just rumours...)
Question
So why doesn't Java have an implementation of true multidimensional arrays? Is there a solid technical reason? What am I missing here?
Update
A bizarre side note: the difference in timings drops away to only a few percent if you use an int for the running total rather than a long. Why would there be such a small difference with an int, and such a big difference with a long?
Benchmarking code
Code I used for benchmarking, in case anyone wants to try to reproduce these results:
import java.util.Random;

public class Multidimensional {
public static long sumSingle(final int[] arr) {
long total = 0;
for (int i=0; i<arr.length; i++)
total+=arr[i];
return total;
}
public static long sumMulti(final int[][][] arr) {
long total = 0;
for (int i=0; i<arr.length; i++)
for (int j=0; j<arr[0].length; j++)
for (int k=0; k<arr[0][0].length; k++)
total+=arr[i][j][k];
return total;
}
public static void main(String[] args) {
final int iterations = 1000000;
Random r = new Random();
int[] arr = new int[1000000];
for (int i=0; i<arr.length; i++)
arr[i]=r.nextInt();
long total = 0;
System.out.println(sumSingle(arr));
long time = System.nanoTime();
for (int i=0; i<iterations; i++)
total = sumSingle(arr);
time = System.nanoTime()-time;
System.out.printf("Took %d ms for single dimension\n", time/1000000, total);
int[][][] arrMulti = new int[100][100][100];
for (int i=0; i<arrMulti.length; i++)
for (int j=0; j<arrMulti[i].length; j++)
for (int k=0; k<arrMulti[i][j].length; k++)
arrMulti[i][j][k]=r.nextInt();
System.out.println(sumMulti(arrMulti));
time = System.nanoTime();
for (int i=0; i<iterations; i++)
total = sumMulti(arrMulti);
time = System.nanoTime()-time;
System.out.printf("Took %d ms for multi dimension\n", time/1000000, total);
}
}
but it seems that this is really not what one might have expected.
Why?
Given that the form T[] means "array of type T", then just as we would expect int[] to mean "array of type int", we would expect int[][] to mean "array of type array of type int", because there's no less reason for having int[] as the T than int.
As such, considering that one can have arrays of any type, it follows just from the way [ and ] are used in declaring and initialising arrays (and for that matter, {, } and ,), that without some sort of special rule banning arrays of arrays, we get this sort of use "for free".
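To make that concrete:
int[] row = new int[10]; // here T is int: an array of int
int[][] grid = new int[10][]; // here T is int[]: an array whose elements are arrays of int
grid[0] = row; // an int[] is an ordinary element value like any other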
Now consider also that there are things we can do with jagged arrays that we can't do otherwise:
We can have "jagged" arrays where different inner arrays are of different sizes.
We can have null entries within the outer array where that is an appropriate mapping of the data, or perhaps to allow lazy building.
We can deliberately alias within the array so that e.g. lookup[1] is the same array as lookup[5]. (This can allow for massive savings with some data-sets; e.g. many Unicode properties can be mapped for the full set of 1,112,064 code points in a small amount of memory, because leaf arrays of properties can be repeated for ranges with matching patterns. See the sketch after this list.)
Some heap implementations can handle the many smaller objects better than one large object in memory.
There are certainly cases where this sort of multidimensional array is useful.
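The aliasing trick, as a sketch with made-up block size and property contents (cp stands for the code point being looked up):
// Two-level lookup in which most blocks alias one shared leaf array.
int[] defaultBlock = new int[256]; // properties for a "typical" 256-code-point range
int[][] lookup = new int[0x110000 / 256][]; // one block per 256 code points
java.util.Arrays.fill(lookup, defaultBlock); // every block starts as an alias of the same leaf
lookup[0] = new int[256]; // only ranges that differ get a leaf of their own
int props = lookup[cp >>> 8][cp & 0xFF]; // two loads, tiny memory footprint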
Now, the default state of any feature is unspecified and unimplemented. Someone needs to decide to specify and implement a feature, or else it wouldn't exist.
As shown above, the array-of-arrays sort of multidimensional array will exist unless someone decides to introduce a special feature banning arrays of arrays. Since arrays of arrays are useful for the reasons above, that would be a strange decision to make.
Conversely, the sort of multidimensional array where an array has a defined rank that can be greater than 1 and so be used with a set of indices rather than a single index, does not follow naturally from what is already defined. Someone would need to:
Decide on a specification for how the declaration, initialisation and use would work.
Document it.
Write the actual code to do this.
Test the code to do this.
Handle the bugs, edge-cases, reports of bugs that aren't actually bugs, backward-compatibility issues caused by fixing the bugs.
Also users would have to learn this new feature.
So, it has to be worth it. Some things that would make it worth it would be:
If there was no way of doing the same thing.
If the way of doing the same thing was strange or not well-known.
People would expect it from similar contexts.
Users can't provide similar functionality themselves.
In this case though:
There is a way of doing the same thing: flat arrays indexed by hand.
Using strides within arrays was already well known to C and C++ programmers, and Java built on their syntax, so the same techniques are directly applicable.
Java's syntax was based on C++, and C++ similarly only has direct support for multidimensional arrays as arrays-of-arrays. (Except when statically allocated, but that's not something that would have an analogy in Java where arrays are objects).
One can easily write a class that wraps an array and details of stride-sizes and allows access via a set of indices.
Really, the question is not "Why doesn't Java have true multidimensional arrays?" but "Why should it?"
Of course, the points you made in favour of multidimensional arrays are valid, and some languages do have them for that reason, but the burden is nonetheless to argue a feature in, not argue it out.
(I hear a rumour that C# does something like this, although I also hear another rumour that the CLR implementation is so bad that it's not worth having... perhaps they're just rumours...)
Like many rumours, there's an element of truth here, but it is not the full truth.
.NET arrays can indeed have multiple ranks. This is not the only way in which it is more flexible than Java. Each rank can also have a lower-bound other than zero. As such, you could for example have an array that goes from -3 to 42 or a two dimensional array where one rank goes from -2 to 5 and another from 57 to 100, or whatever.
C# does not give complete access to all of this from its built-in syntax (you need to call Array.CreateInstance() for lower bounds other than zero), but it does allow you to use the syntax int[,] for a two-dimensional array of int, int[,,] for a three-dimensional array, and so on.
Now, the extra work involved in dealing with lower bounds other than zero adds a performance burden, and yet these cases are relatively uncommon. For that reason single-rank arrays with a lower-bound of 0 are treated as a special case with a more performant implementation. Indeed, they are internally a different sort of structure.
In .NET multi-dimensional arrays with lower bounds of zero are treated as multi-dimensional arrays whose lower bounds just happen to be zero (that is, as an example of the slower case) rather than the faster case being able to handle ranks greater than 1.
Of course, .NET could have had a fast-path case for zero-based multi-dimensional arrays, but then all the reasons for Java not having them apply and the fact that there's already one special case, and special cases suck, and then there would be two special cases and they would suck more. (As it is, one can have some issues with trying to assign a value of one type to a variable of the other type).
Not a single thing above shows that Java couldn't possibly have had the sort of multidimensional array you talk of; it would have been a sensible enough decision, but so was the decision that was actually made.
This should be a question to James Gosling, I suppose. The initial design of Java was about OOP and simplicity, not about speed.
If you have a better idea of how multidimensional arrays should work, there are several ways of bringing it to life:
Submit a JDK Enhancement Proposal.
Develop a new JSR through Java Community Process.
Propose a new Project.
Update: of course, you are not the first to question the problems of Java's array design.
For instance, projects Sumatra and Panama would also benefit from true multidimensional arrays.
"Arrays 2.0" is John Rose's talk on this subject at JVM Language Summit 2012.
To me it looks like you sort of answered the question yourself:
... an incentive to write it as a flat array, even if that makes the unnatural and hard to read.
So write it as a flat array which is easy to read. With a trivial helper like
double get(int row, int col) {
return data[rowLength * row + col];
}
and a similar setter and possibly a +=-equivalent, you can pretend you're working with a 2D array. It's really no big deal. You can't use the array notation, and everything gets verbose and ugly, but that seems to be the Java way. It's exactly the same as with BigInteger or BigDecimal. You can't use brackets for accessing a Map either; that's a very similar case.
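For instance, the setter and the +=-equivalent might look like this (a sketch assuming the same data and rowLength fields as the getter):
void set(int row, int col, double value) {
data[rowLength * row + col] = value;
}
void add(int row, int col, double delta) { // the "+=" equivalent
data[rowLength * row + col] += delta;
}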
Now the question is how important all those features are. Would more people be happy if they could write x += new BigDecimal("123456.654321") + 10;, or spouse["Paul"] = "Mary";, or use 2D arrays without the boilerplate, or what? All of this would be nice, and you could go further, e.g., array slices. But there's no real problem; you have to choose between verbosity and inefficiency, as in many other cases. IMHO, the effort spent on this feature could be better spent elsewhere.
Java actually has no 2D primitive arrays; what it has is mostly syntactic sugar, since the underlying thing is an array of objects:
double[][] a = new double[1][1];
Object[] b = a;
As arrays are reified, the current implementation needs hardly any support. Your implementation would open a can of worms:
There are currently 8 primitive types, which means 9 array types, would a 2D array be the tenth? What about 3D?
There is a single special object header type for arrays. A 2D array could need another one.
What about java.lang.reflect.Array? Clone it for 2D arrays?
Many other features would have to be adapted, e.g. serialization.
And what would
??? x = {new int[1], new int[2]};
be? An old-style 2D int[][]? What about interoperability?
I guess, it's all doable, but there are simpler and more important things missing from Java. Some people need 2D arrays all the time, but many can hardly remember when they used any array at all.
I am unable to reproduce the performance benefits you claim. Specifically, the test program:
import java.math.BigDecimal;
import java.util.Random;

public abstract class Benchmark {
final String name;
public Benchmark(String name) {
this.name = name;
}
abstract int run(int iterations) throws Throwable;
private BigDecimal time() {
try {
int nextI = 1;
int i;
long duration;
do {
i = nextI;
long start = System.nanoTime();
run(i);
duration = System.nanoTime() - start;
nextI = (i << 1) | 1;
} while (duration < 1000000000 && nextI > 0);
return new BigDecimal((duration) * 1000 / i).movePointLeft(3);
} catch (Throwable e) {
throw new RuntimeException(e);
}
}
@Override
public String toString() {
return name + "\t" + time() + " ns";
}
public static void main(String[] args) throws Exception {
final int[] flat = new int[100*100*100];
final int[][][] multi = new int[100][100][100];
Random chaos = new Random();
for (int i = 0; i < flat.length; i++) {
flat[i] = chaos.nextInt();
}
for (int i=0; i<multi.length; i++)
for (int j=0; j<multi[0].length; j++)
for (int k=0; k<multi[0][0].length; k++)
multi[i][j][k] = chaos.nextInt();
Benchmark[] marks = {
new Benchmark("flat") {
@Override
int run(int iterations) throws Throwable {
long total = 0;
for (int j = 0; j < iterations; j++)
for (int i = 0; i < flat.length; i++)
total += flat[i];
return (int) total;
}
},
new Benchmark("multi") {
@Override
int run(int iterations) throws Throwable {
long total = 0;
for (int iter = 0; iter < iterations; iter++)
for (int i=0; i<multi.length; i++)
for (int j=0; j<multi[0].length; j++)
for (int k=0; k<multi[0][0].length; k++)
total+=multi[i][j][k];
return (int) total;
}
},
new Benchmark("multi (idiomatic)") {
@Override
int run(int iterations) throws Throwable {
long total = 0;
for (int iter = 0; iter < iterations; iter++)
for (int[][] a : multi)
for (int[] b : a)
for (int c : b)
total += c;
return (int) total;
}
}
};
for (Benchmark mark : marks) {
System.out.println(mark);
}
}
}
run on my workstation with
java version "1.8.0_05"
Java(TM) SE Runtime Environment (build 1.8.0_05-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.5-b02, mixed mode)
prints
flat 264360.217 ns
multi 270303.246 ns
multi (idiomatic) 266607.334 ns
That is, we observe a mere 3% difference between the one-dimensional and the multi-dimensional code you provided. This difference drops to 1% if we use idiomatic Java (specifically, an enhanced for loop) for traversal (probably because bounds checking is performed on the same array object the loop dereferences, enabling the just in time compiler to elide bounds checking more completely).
Performance therefore seems an inadequate justification for increasing the complexity of the language. Specifically, to support true multidimensional arrays, the Java programming language would have to distinguish between arrays of arrays and multidimensional arrays.
Likewise, programmers would have to distinguish between them, and be aware of their differences. API designers would have to ponder whether to use an array of arrays, or a multidimensional array. The compiler, class file format, class file verifier, interpreter, and just in time compiler would have to be extended. This would be particularly difficult, because multidimensional arrays of different dimension counts would have an incompatible memory layout (because the size of their dimensions must be stored to enable bounds checking), and can therefore not be subtypes of each other. As a consequence, the methods of class java.util.Arrays would likely have to be duplicated for each dimension count, as would all otherwise polymorphic algorithms working with arrays.
To conclude, extending Java to support multidimensional arrays would offer negligible performance gain for most programs, but require non-trivial extensions to its type system, compiler and runtime environment. Introducing them would therefore have been at odds with the design goals of the Java programming language, specifically that it be simple.
Since this question is to a great extent about performance, let me contribute a proper JMH-based benchmark. I have also changed some things to make your example both simpler and the performance edge more prominent.
In my case I compare a 1D array with a 2D-array, and use a very short inner dimension. This is the worst case for the cache.
I have tried with both long and int accumulator and saw no difference between them. I submit the version with int.
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.*;
import static java.util.concurrent.TimeUnit.MILLISECONDS;

@OutputTimeUnit(TimeUnit.NANOSECONDS)
@BenchmarkMode(Mode.AverageTime)
@OperationsPerInvocation(Measure.X * Measure.Y)
@Warmup(iterations = 30, time = 100, timeUnit = MILLISECONDS)
@Measurement(iterations = 5, time = 1000, timeUnit = MILLISECONDS)
@State(Scope.Thread)
@Threads(1)
@Fork(1)
public class Measure
{
static final int X = 100_000, Y = 10;
private final int[] single = new int[X*Y];
private final int[][] multi = new int[X][Y];
@Setup public void setup() {
final ThreadLocalRandom rnd = ThreadLocalRandom.current();
for (int i=0; i<single.length; i++) single[i] = rnd.nextInt();
for (int i=0; i<multi.length; i++)
for (int j=0; j<multi[0].length; j++)
multi[i][j] = rnd.nextInt();
}
@Benchmark public long sumSingle() { return sumSingle(single); }
@Benchmark public long sumMulti() { return sumMulti(multi); }
public static long sumSingle(int[] arr) {
int total = 0;
for (int i=0; i<arr.length; i++)
total+=arr[i];
return total;
}
public static long sumMulti(int[][] arr) {
int total = 0;
for (int i=0; i<arr.length; i++)
for (int j=0; j<arr[0].length; j++)
total+=arr[i][j];
return total;
}
}
The difference in performance is larger than what you have measured:
Benchmark Mode Samples Score Score error Units
o.s.Measure.sumMulti avgt 5 1,356 0,121 ns/op
o.s.Measure.sumSingle avgt 5 0,421 0,018 ns/op
That's a factor above three. (Note that the timing is reported per array element.)
I also note that there is no warmup involved: the first 100 ms are as fast as the rest. Apparently this is such a simple task that the interpreter already does all it takes to make it optimal.
Update
Changing sumMulti's inner loop to
for (int j=0; j<arr[i].length; j++)
total+=arr[i][j];
(note arr[i].length) resulted in a significant speedup, as predicted by maaartinus. Using arr[0].length makes it impossible to eliminate the index range check. Now the results are as follows:
Benchmark Mode Samples Score Error Units
o.s.Measure.sumMulti avgt 5 0,992 ± 0,066 ns/op
o.s.Measure.sumSingle avgt 5 0,424 ± 0,046 ns/op
If you want a fast implementation of a true multi-dimensional array, you could write a custom implementation like the one below. But you are right... it is not as crisp as the array notation, although a neat implementation could be quite friendly.
public class MyArray {
private final int rows;
private final int cols;
private final String[] backingArray;
public MyArray(int rows, int cols) {
this.rows = rows;
this.cols = cols;
backingArray = new String[rows * cols];
}
public String get(int row, int col) {
return backingArray[row * cols + col];
}
// ... setters and other stuff
}
Why is it not the default implementation?
The designers of Java probably had to decide how the familiar C array syntax would behave. They had a single array notation, which could implement either arrays of arrays or true multi-dimensional arrays.
I think the early Java designers were really concerned with Java being safe. A lot of decisions seem to have been taken to make it difficult for the average programmer (or a good programmer on a bad day) to mess something up. With true multi-dimensional arrays, it is easier for users to waste large chunks of memory by allocating blocks where they are not useful.
Also, given Java's embedded-systems roots, the designers probably found that it was more likely that small pieces of memory would be available to allocate than the large contiguous chunks required for true multi-dimensional objects.
Of course, the flip side is that places where multi-dimensional arrays really make sense suffer. And you are forced to use a library and messy looking code to get your work done.
Why is it still not included in the language?
Even today, true multi-dimensional arrays are a risk from the point of view of possible memory wastage and misuse.
Related
Which of the following is faster in Java? Is there any other way that is faster than any of these?
int[][] matrix = new int[50][50];
for (int k=0;k<10;k++){
// some calculations here before resetting the array to zero
for (int i = 0; i <50; i++) {
for (int j = 0; j <50; j++) {
matrix[i][j]=0;
}
}
}
Or this:
int[][] matrix = new int[50][50];
for (int k=0;k<10;k++){
// some calculations here before resetting the array to zero
matrix = new int[50][50];
}
The fastest way to perform an action that leaves the variable matrix in an equivalent state at the end of a run is the declaration alone: int[][] matrix = new int[50][50];
However, none of these solutions are equivalent in terms of number of operations or memory thrash. The statement I've provided is what you are looking for.
Update: with your updated question, where you are manipulating matrix and then resetting it:
Your second example will likely be faster on each iteration. The thought being that it is faster to allocate memory than iterate and set a variable 50^2 times. Though this is a question for a profiler. In general, zeroing out memory is something that is better optimized by the JVM than your application.
This being said, it is important to remember that memory allocation is not without caveats in extreme scenarios. If you allocate and trash memory too often, you may have a suboptimal GC experience.
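A third option worth profiling alongside the two above is clearing the rows in place with Arrays.fill, which avoids both the hand-written inner loop and the extra garbage:
for (int[] row : matrix) {
java.util.Arrays.fill(row, 0); // reset every element of this row, reusing the existing arrays
}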
I have this requirement that I need to set the values in a byte array of size 20MB.
I'm looking for a Java API which does the following. I've gone through Apache Commons ArrayUtils but couldn't find something useful.
The operation should be something of this type. Say the values range from 0 to 100.
I'd like to manipulate the array such that values less than 15 are changed to 15 and values greater than 70 are changed to 70.
Basically, I'm looking for an operation which would let me avoid doing this myself: iterate through the array; if a value is below 15, set it to 15; if it is above 70, set it to 70.
Any help is appreciated.
Even if there's some third-party library which has this functionality, it's just going to be doing exactly the same operation - looping over an array. Fundamentally you need something like:
for (int i = 0; i < array.length; i++)
{
array[i] = clamp(array[i], 15, 70);
}
...
public static byte clamp(byte value, byte min, byte max)
{
return value < min ? min
: value > max ? max
: value;
}
You could implement this in native code if you really wanted, but I suspect you won't find an existing implementation. It's more likely that there are libraries which perform the sort of image manipulation you're interested in as image manipulation rather than as an array operation.
You could use Guava's Lists.transform method to update the values. However, this results in a new array rather than updating the values in the existing one.
List<Byte> list = Lists.newArrayList(myArray);
List<Byte> trans = Lists.transform(list, new Function<Byte, Byte>(){...});
byte[] bytes = Bytes.toArray(trans);
However, given what you are trying to do, I would suggest just looping over the values.
I'd recommend that you write the simple loop, and profile it in the context of your application. Only if you can demonstrate that this code is the overall bottleneck it would make sense to try and make it faster.
I'd try something like this:
final int n = array.length;
for (int i = 0; i < n; i++) {
int val = array[i];
if (val < 15) {
array[i] = 15;
} else if (val > 70) {
array[i] = 70;
}
}
My final point is that this type of code is likely to be limited by memory bandwidth, so it seems unlikely that a native C solution would be a lot faster anyway.
Instead of checking the ranges as Jon Skeet proposes, you could create a lookup table for each of the 256 possible values a byte can have, i.e. something like
{15,15,15,15,15,15,15,15,15,15,15,15,15,15,15,16,17,18,...,69,70,70,70,70,...}
for (int i = 0; i < array.length; i++)
{
array[i] = lookup[array[i]];
}
In C: less branching, much faster. In Java: unfortunately not faster, even a bit slower, maybe because Java's array range checks eat up the speed gained; and since Java's bytes are always signed, it's a bit more complicated than shown above.
In C, you could even do that for 16-bit halfwords, making it faster again (probably by a factor of 2).
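For reference, a sketch of the signed-byte-safe Java version, using the 15/70 bounds from the question (the table is built once, and indices are mapped with & 0xFF):
byte[] lookup = new byte[256];
for (int v = 0; v < 256; v++) {
byte b = (byte) v; // interpret the bit pattern as a signed byte
lookup[v] = (byte) (b < 15 ? 15 : b > 70 ? 70 : b);
}
for (int i = 0; i < array.length; i++) {
array[i] = lookup[array[i] & 0xFF]; // & 0xFF maps the signed byte to an index 0..255
}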
EDIT: To my own shame, I must admit that proper testing revealed that the lookup table isn't faster in C. My first results were probably skewed by compiler optimisations. Anyway, at least on my machine,
if (array[i]<15) array[i]=15;
else if (array[i]>70) array[i]=70;
is noticeably faster than using the ternary operator.
I'm doing the following on my code:
double[][] temp=new double[0][2];
The program runs with no runtime exceptions. When I get the length of temp with temp.length it returns 0, and when I try to access the length of the inner arrays with temp[0].length it always throws an ArrayIndexOutOfBoundsException. (That was only a test.)
Now I am wondering: did Java create an array with 0 length and, at the same time, inner arrays with a length of 2 inside that zero-length array?
Does this kind of declaration have implications for memory management?
Will it cause complications when coding or running the code?
Does Java really permit this kind of declaration?
In what sense did they permit this kind of declaration, or did they just overlook this kind of situation?
And if they permit this declaration, does it also have some special uses?
I was just exploring the possibility of making this kind of declaration and had been questioning myself whether it is really permissible.
Your opinions are gladly appreciated.
It is equivalent to
double[][] temp = new double[0][]; // a zero length array of double[]
for(int d=0; d<0; d++)
temp[d] = new double[2]; // whose each element is a new double[2]
Of course the loop isn't executed, so no inner arrays are created and there's no waste.
See 15.10.1 Run-time Evaluation of Array Creation Expressions (JLS 3, Chapter 15, Expressions):
If an array creation expression contains N DimExpr expressions, then it effectively executes a set of nested loops of depth N-1 to create the implied arrays of arrays. For example, the declaration:
float[][] matrix = new float[3][3];
is equivalent in behavior to:
float[][] matrix = new float[3][];
for (int d = 0; d < matrix.length; d++)
matrix[d] = new float[3];
So
double[][] temp=new double[0][2];
will be equivalent to
double[][] matrix = new double[0][];
for (int d = 0; d < 0; d++)
matrix[d] = new double[2]; // would never happen
The only valid scenario I can think of is where you want to return an empty two-dimensional array:
double[][] temp = new double[0][0];
return temp;
The above is a valid requirement in many matrix calculations.
Does this kind of declaration have implications for memory management?
Not sure. It might also vary from one JVM implementation to another.
Will it cause complications when coding or running the code?
It should not if you are accessing the array in a loop like this
for(int i = 0; i<temp.length;i++)
for(int j=0; j<temp[i].length;j++)
{
// your code
}
Otherwise, if you are accessing elements directly by index, you should first check the index bounds.
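That is, something like:
if (i >= 0 && i < temp.length && j >= 0 && j < temp[i].length) {
double v = temp[i][j]; // safe: both indices are known to be within bounds
}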
Does Java really permit this kind of declaration?
Yes. As I have said in the first statement.
In what sense did they permit this kind of declaration, or did they just overlook this kind of situation?
As said before, a valid scenario is where you want to return an empty two-dimensional array.
There might be other scenarios.
And if they permit this declaration, does it also have some special uses?
Other than my last answer, I am not sure of any other scenario, but I would love to know if any exist.
I need to pass an x/y pair around. I was just using java.awt.Point. I do this a lot given the nature of the app, but it is tons slower than plain arrays. I also tried to create my own "FastPoint", which is just an int x/y with a very simple constructor; that's really slow too.
Times are in milliseconds:
java.awt.Point: 10374
FastPoint: 10032
Arrays: 1210
public class FastPoint {
public int x;
public int y;
public FastPoint(int x, int y) {
this.x = x;
this.y = y;
}
}
jvisualvm says Point (either AWT's or my own) uses tons of memory compared to a simple int[] array.
I guess that's just the overhead of creating an object instead of, um, a basic type? Is there any way to tweak or optimize this Point class? I've already switched to plain int arrays (which is tons faster now), but I'm just trying to understand why this is slow and whether there is anything I can do about it.
Test Code:
Point point; // declared outside the loops
int[] a = new int[2]; // note: the array is allocated only once
for (int i = 0; i < maxRuns; i++) {
point = new Point(i, i);
}
for (int i = 0; i < maxRuns; i++) {
a[0] = i; a[1] = i;
}
Your test harness is biased: You create a new point in each iteration, but create the array just once. If you move the array allocation into the loop, the difference is not as big, and arrays are actually slightly slower:
Point: 19 nano seconds / iteration
Array: 47 nano seconds / iteration
This is as expected, since array accesses need to perform bounds checking, but field assignment doesn't (the JIT has apparently inlined the point constructor).
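Concretely, "moving the array allocation into the loop" means the array side of the harness becomes (a sketch; maxRuns as in the question):
for (int i = 0; i < maxRuns; i++) {
int[] a = new int[2]; // allocate per iteration, just like the Point case
a[0] = i; a[1] = i;
}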
Also note that instrumenting a virtual machine for cpu profiling incurs additional overhead, which can - in some cases drastically - change the performance behaviour of the application under test.
I am reading a text file which contains numbers in the range [1, 10^100]. I am then performing a sequence of arithmetic operations on each number. I would like to use a BigInteger only if the number is out of the int/long range. One approach would be to count how many digits there are in the string and switch to BigInteger if there are too many. Otherwise I'd just use primitive arithmetic as it is faster. Is there a better way?
Is there any reason why Java could not do this automatically i.e. switch to BigInteger if an int was too small? This way we would not have to worry about overflows.
I suspect the decision to use primitive values for integers and reals (done for performance reasons) made that option not possible. Note that Python and Ruby both do what you ask.
In this case it may be more work to handle the smaller special case than it is worth (you need some custom class to handle the two cases), and you should just use BigInteger.
Is there any reason why Java could not do this automatically i.e. switch to BigInteger if an int was too small?
Because that is higher-level behaviour than Java currently provides. The language is not even aware of the BigInteger class and what it does (i.e. it's not in the JLS); it's only aware of Integer (among other things) for boxing and unboxing purposes.
Speaking of boxing/unboxing: an int is a primitive type, while BigInteger is a reference type. You can't have a single variable that holds values of both types.
You could read the values into BigIntegers, and then convert them to longs if they're small enough.
private static final BigInteger LONG_MAX = BigInteger.valueOf(Long.MAX_VALUE);
private static List<BigInteger> readAndProcess(BufferedReader rd) throws IOException {
List<BigInteger> result = new ArrayList<BigInteger>();
for (String line; (line = rd.readLine()) != null; ) {
BigInteger bignum = new BigInteger(line);
if (bignum.compareTo(LONG_MAX) > 0) // doesn't fit in a long
result.add(bignumCalculation(bignum));
else result.add(BigInteger.valueOf(primitiveCalculation(bignum.longValue())));
}
return result;
}
private static BigInteger bignumCalculation(BigInteger value) {
// perform the calculation
}
private static long primitiveCalculation(long value) {
// perform the calculation
}
(You could make the return value a List<Number> and have it a mixed collection of BigInteger and Long objects, but that wouldn't look very nice and wouldn't improve performance by a lot.)
The performance may be better if a large proportion of the numbers in the file are small enough to fit in a long (depending on the complexity of the calculation). There's still a risk of overflow depending on what you do in primitiveCalculation, and you've now repeated the code, (at least) doubling the bug potential, so you'll have to decide if the performance gain really is worth it.
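If the calculation is simple arithmetic, one way to contain that overflow risk is the exact-arithmetic methods added in Java 8, which throw instead of silently wrapping; the caller can then catch ArithmeticException and redo that one number via bignumCalculation. A sketch with a hypothetical 2*x+1 calculation:
private static long primitiveCalculation(long value) {
// Math.multiplyExact/addExact throw ArithmeticException on overflow instead of wrapping
return Math.addExact(Math.multiplyExact(value, 2L), 1L); // hypothetical calculation: 2*value + 1
}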
If your code is anything like my example, though, you'd probably have more to gain by parallelizing the code so the calculations and the I/O aren't performed on the same thread - you'd have to do some pretty heavy calculations for an architecture like that to be CPU-bound.
The impact of using BigDecimals when something smaller will suffice is surprisingly, err, big: Running the following code
// needs at file top: import java.math.BigDecimal; import java.text.DecimalFormat; import java.util.Random;
public static class MyLong {
private long l;
public MyLong(long l) { this.l = l; }
public void add(MyLong l2) { l += l2.l; }
}
public static void main(String[] args) throws Exception {
// generate lots of random numbers
long ls[] = new long[100000];
BigDecimal bds[] = new BigDecimal[100000];
MyLong mls[] = new MyLong[100000];
Random r = new Random();
for (int i=0; i<ls.length; i++) {
long n = r.nextLong();
ls[i] = n;
bds[i] = new BigDecimal(n);
mls[i] = new MyLong(n);
}
// time with longs & Bigints
long t0 = System.currentTimeMillis();
for (int j=0; j<1000; j++) for (int i=0; i<ls.length-1; i++) {
ls[i] += ls[i+1];
}
long t1 = Math.max(t0 + 1, System.currentTimeMillis());
for (int j=0; j<1000; j++) for (int i=0; i<ls.length-1; i++) {
bds[i].add(bds[i+1]); // result discarded: BigDecimal is immutable, we only time the addition
}
long t2 = System.currentTimeMillis();
for (int j=0; j<1000; j++) for (int i=0; i<ls.length-1; i++) {
mls[i].add(mls[i+1]);
}
long t3 = System.currentTimeMillis();
// compare times
t3 -= t2;
t2 -= t1;
t1 -= t0;
DecimalFormat df = new DecimalFormat("0.00");
System.err.println("long: " + t1 + "ms, bigd: " + t2 + "ms, x"
+ df.format(t2*1.0/t1) + " more, mylong: " + t3 + "ms, x"
+ df.format(t3*1.0/t1) + " more");
}
produces, on my system, this output:
long: 375ms, bigd: 6296ms, x16.79 more, mylong: 516ms, x1.38 more
The MyLong class is there only to look at the effects of boxing, to compare against what you would get with a custom BigOrLong class.
Java is fast--really, really fast. It's only 2-4x slower than C, and sometimes as fast or a tad faster, where most other languages are 10x (Python) to 100x (Ruby) slower than C/Java. (Fortran is also hella-fast, by the way.)
Part of this is because it doesn't do things like switch number types for you. It could, but currently it can inline an operation like a*5 in just a few bytes of code; imagine the hoops it would have to go through if a were an object. It would at least be a dynamic call to a's multiply method, which would be a few hundred to a few thousand times slower than when a was simply an integer value.
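The contrast is easy to see in today's Java; roughly:
int a = 7;
int x = a * 5; // a single primitive multiply, no allocation

java.math.BigInteger b = java.math.BigInteger.valueOf(7);
java.math.BigInteger y = b.multiply(java.math.BigInteger.valueOf(5)); // method call plus object allocation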
Java probably could, these days, use JIT compilation to optimise the call better and inline it at runtime, but even then very few library calls support BigInteger/BigDecimal, so there would need to be a LOT of new native support; it would be a completely new language.
Also imagine how switching from int to BigInteger instead of long would make debugging video games crazy-hard! (Yeah, every time we move to the right side of the screen the game slows down by 50x, the code is all the same! How is this possible?!??)
Would it have been possible? Yes. But there are many problems with it.
Consider, for instance, that Java stores references to BigInteger, which is actually allocated on the heap, but stores int values directly. The difference can be made clear in C:
int i;
BigInt* bi;
Now, to automatically go from a literal to a reference, one would necessarily have to annotate the literal somehow. For instance, if the highest bit of the int was set, then the other bits could be used as a table lookup of some sort to retrieve the proper reference. That also means you'd get a BigInt** bi whenever it overflowed into that.
Of course, that's the bit usually used for sign, and hardware instructions pretty much depend on it. Worse still, if we do that, then the hardware won't be able to detect overflow and set the flags to indicate it. As a result, each operation would have to be accompanied by some test to see if an overflow has happened or will happen (depending on when it can be detected).
All that would add a lot of overhead to basic integer arithmetic, which would in practice negate any benefits you had to begin with. In other words, it is faster to assume BigInt than it is to try to use int and detect overflow conditions while at the same time juggling with the reference/literal problem.
So, to get any real advantage, one would have to use more space to represent ints. So instead of storing 32 bits in the stack, in the objects, or anywhere else we use them, we store 64 bits, for example, and use the additional 32 bits to control whether we want a reference or a literal. That could work, but there's an obvious problem with it -- space usage. :-) We might see more of it with 64 bits hardware, though.
Now, you might ask why not just 40 bits (32 bits + 1 byte) instead of 64? Basically, on modern hardware it is preferable to store stuff in 32 bits increments for performance reasons, so we'll be padding 40 bits to 64 bits anyway.
EDIT
Let's consider how one could go about doing this in C#. Now, I have no programming experience with C#, so I can't write the code to do it, but I expect I can give an overview.
The idea is to create a struct for it. It should look roughly like this:
public struct MixedInt
{
private int i;
private System.Numerics.BigInteger bi;
public MixedInt(string s)
{
bi = System.Numerics.BigInteger.Parse(s);
i = 0;
if (bi <= int.MaxValue && bi >= int.MinValue)
{
i = (int)bi;
bi = 0;
}
}
// Define all required operations
}
So, if the number is in the integer range we use int, otherwise we use BigInteger. The operations have to ensure transition from one to another as required/possible. From the client point of view, this is transparent. It's just one type MixedInt, and the class takes care of using whatever fits better.
Note, however, that this kind of optimization may well be part of C#'s BigInteger already, given its implementation as a struct.
If Java had something like C#'s struct, we could do something like this in Java as well.
Is there any reason why Java could not do this automatically, i.e. switch to BigInteger if an int was too small?
This is one of the advantages of dynamic typing, but Java is statically typed, which prevents this.
In a dynamically typed language, when summing two Integers would produce an overflow, the system is free to return, say, a Long. Because dynamically typed languages rely on duck typing, that's fine. The same cannot happen in a statically typed language; it would break the type system.
EDIT
Given that my answer and comment were not clear, here I try to provide more details on why I think static typing is the main issue:
1) The very fact that we speak of primitive types is a static typing issue; we wouldn't care in a dynamically typed language.
2) With primitive types, the result of the overflow cannot be converted to a type other than int, because that would not be correct w.r.t. static typing:
int i = Integer.MAX_VALUE + 1; // -2147483648
3) With reference types, it's the same, except that we have autoboxing. Still, the addition could not return, say, a BigInteger, because that would not match the static type system (a BigInteger cannot be cast to Integer):
Integer j = new Integer( Integer.MAX_VALUE ) + 1; // -2147483648
4) What could be done is to subclass, say, Number and implement a type UnboundedNumeric that optimizes the representation internally (representation independence):
UnboundedNum k = new UnboundedNum( Integer.MAX_VALUE ).add( 1 ); // 2147483648
Still, it's not really the answer to the original question.
5) With dynamic typing, something like
var d = new Integer( Integer.MAX_VALUE ) + 1; // 2147483648
would return a Long, which is fine.