java: How does the memory of a class member allocate? - java

We have a big class with 68 int and 22 double members, and there are also 4 members that are themselves classes.
e.g.
class A {
    public int i1;
    public int i2;
    public int i3;
    // ...
    public Order order1;
    public Order order2;
    // ...
    // ... plus the double members
}
1: Are i1, i2, i3 laid out contiguously in physical memory?
2: For class A, does it store pointers to order1 and order2, or does it store the contents of order1 and order2?
There is another class B which has a member that is an array of 365 A instances, so the memory for B could be very large. My concern is that if B is too big, we will get lots of L2 cache misses and degrade performance. We will mainly sum the values of i1, sum the values of i2, sum the values of i3, and so on.
e.g. if we sum i1 across all 365 A instances, the i1 fields of those 365 objects will not sit contiguously in memory, so we could take cache misses and get poor performance.
I am thinking of keeping class B but removing class A, moving all the elements inside A into B as arrays:
class B {
    public int[] array_of_i1;
    public int[] array_of_i2;
    // ...
}
This way, when I calculate the sum of i1 or i2, all the i1 (or i2) values sit together, so maybe we get a performance improvement?
As the class is huge, I'd like your opinions before making the change.
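If it helps to see the idea concretely, here is a minimal sketch of that "structure of arrays" layout; the class name, field names, and sum method are all invented for illustration:

```java
// Illustrative sketch only: one parallel array per former field of A.
class YearData {
    final int[] i1 = new int[365];
    final int[] i2 = new int[365];
    final double[] d1 = new double[365];
    // ... and so on for the remaining fields

    // Summing one field now scans a single contiguous int[],
    // which is about as cache-friendly as Java access patterns get.
    int sumI1() {
        int sum = 0;
        for (int v : i1) {
            sum += v;
        }
        return sum;
    }
}
```

Whether this actually wins depends on the workload, so profile before and after.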

It's generally consecutive but it depends on which JVM you are using.
One complication is that the runtime in-memory structure of Java objects is not enforced by the virtual machine specification, which means that virtual machine providers can implement them as they please. The consequence is that you can write a class, and instances of that class in one VM can occupy a different amount of memory than instances of that same class when run in another VM.
As for the specific layout:
In order to save some memory, the Sun VM doesn't lay out an object's attributes in the same order they are declared. Instead, the attributes are organized in memory in the following order:
doubles and longs
ints and floats
shorts and chars
booleans and bytes
references
(from http://www.codeinstructions.com/2008/12/java-objects-memory-structure.html)
He also includes how inherited classes are handled.

The JLS doesn't strongly specify the exact sizes of objects, so this can vary between JVM implementations (though you can infer some lower bounds, i.e. an integer must be at least 32 bits).
In Sun's JVM however, integers take 32 bits, doubles take 64 bits and object references take 32 bits (unless you're running on a 64-bit JVM and pointer compression is disabled). Then the object itself has a 2 word header, and the overall memory size is aligned to a multiple of 8 bytes.
So overall this object should take 8 * ceil((8 + 68 * 4 + 22 * 8 + 4 * 4) / 8) = 472 bytes, if I haven't forgotten to account for something (which is entirely possible), and if you're running on a 32-bit machine.
But - as stated above, you shouldn't really rely too strongly on this as it's not specified anywhere, and will vary between implementations and on different platforms. As always with performance-related metrics, the key is to write clean code, measure the impact (in this case use a profiler to look at memory usage, and execution time) and then optimise as required.
Performance only really matters from the macro perspective; worrying about L2 cache misses when designing your object model is really the wrong way round to do it.
(And a class with 94 fields is almost certainly not a clean design, so you're right to consider refactoring it...)

Firstly, before you embark on any work, have you profiled your application? Are cache misses causing a bottleneck?
What are your performance requirements? (Note: 'as fast as possible' isn't a requirement.)

That would be implementation dependent.
Yes, it stores pointers. The objects will reside elsewhere.

In general, yes. But I don't think you necessarily want to depend on it. Wrong language for that low-level type stuff.
Pointers, but I'm not sure why that matters.
Profile before making significant changes for performance reasons. I think the second is cleaner though. Wouldn't you rather do a simple array loop for your summing?
Or you could change the structure to use a smaller class, keeping the stuff that runs in a tight loop together will tend to improve cache hits (iff that is your performance bottleneck).

Related

Java int memory usage

While I was thinking over the memory usage of various types, I became a bit confused about how Java uses memory for integers passed to a method.
Say, I had the following code:
public static void main(String[] args) {
    int i = 4;
    addUp(i);
}

public static int addUp(int i) {
    if (i == 0) return 0;
    else return addUp(i - 1);
}
In this example, I am wondering whether the following logic is correct:
I initially allocate memory for the integer i = 4. Then I pass it to a method. However, since primitives are passed by value in Java (not by reference), in addUp(i == 4) another integer i = 4 is created. Afterwards there is addUp(i == 3), addUp(i == 2), addUp(i == 1), addUp(i == 0), and each time, since the value is copied rather than referenced, a new i value is allocated in memory.
Then for a single "int i" value, I have used memory for 6 integer values.
However, if I were to always pass it through an array:
public static void main(String[] args) {
    int[] i = {4};
    // int tempI = i[0];
    addUp(i);
}

public static int addUp(int[] i) {
    if (i[0] == 0) return 0;
    else {
        i[0] = i[0] - 1;
        return addUp(i);
    }
}
- Since I create an integer array of size 1 and then pass that to addUp, which is again called as addUp(i[0] == 3), addUp(i[0] == 2), addUp(i[0] == 1), addUp(i[0] == 0), I have only had to use one integer array's worth of memory, and hence this is far more cost-efficient. In addition, if I store the initial value of i[0] in an int beforehand, I still have my "original" value.
Then this leads me to the question, why do people pass primitives like int in Java methods? Isn't it far more memory efficient to just pass the array values of those primitives? Or is the first example somehow still just O(1) memory?
And on top of this question, I just wonder the memory differences of using int[] and int especially for a size of 1. Thank you in advance. I was simply wondering being more memory efficient with Java and this came to my head.
Thanks for all the answers! I'm just now quickly wondering if I were to "analyze" big-oh memory of each code, would they both be considered O(1) or would that be wrong to assume?
What you are missing here: the int values in your example go on the stack, not on the heap.
And it is much less overhead to deal with fixed size primitive values existing on the stack - compared to objects on the heap!
In other words: using a "pointer" means that you have to create a new object on the heap. All objects live on the heap; there is no stack allocation for arrays! And objects become subject to garbage collection as soon as you stop using them. Stack frames, on the other hand, come and go as you invoke methods!
Beyond that: keep in mind that the abstractions that programming languages provide to us are created to help us writing code that is easy to read, understand and maintain. Your approach is basically to do some sort of fine tuning that leads to more complicated code. And that is not how Java solves such problems.
Meaning: with Java, the real "performance magic" happens at runtime, when the just-in-time compiler kicks in! You see, the JIT can inline calls to small methods when the method is invoked "often enough". And then it becomes even more important to keep data "close" together. As in: when data lives on the heap, you might have to access memory to get a value. Whereas items living on the stack - might still be "close" (as in: in the processor cache). So your little idea to optimize memory usage could actually slow down program execution by orders of magnitude. Because even today, there are orders of magnitude between accessing the processor cache and reading main memory.
Long story short: avoid getting into such "micro-tuning" for either performance or memory usage: the JVM is optimized for the "normal, typical" use cases. Your attempts to introduce clever work-arounds can therefore easily result in "less good" results.
So - when you worry about performance: do what everybody else is doing. And if you really care - then learn how the JVM works. As it turns out, even my knowledge is slightly outdated - the comments imply that a JIT can allocate objects on the stack. In that sense: focus on writing clean, elegant code that solves the problem in a straightforward way!
Finally: this is subject to change at some point. There are ideas to introduce true value objects to Java, which would basically live on the stack, not the heap. But don't expect that to happen before Java 10. Or 11. Or ... (I think this is relevant here.)
Several things:
First thing will be splitting hairs, but when you pass an int in java you are allocating 4 bytes onto the stack, and when you pass an array (because it is a reference) you are actually allocating 8 bytes (assuming an x64 architecture) onto the stack, plus the additional 4 bytes that store the int into the heap.
More importantly, the data that lives in the array is allocated on the heap, whereas the reference to the array itself is allocated on the stack; when passing an integer, no heap allocation is required - the primitive is allocated only on the stack. Over time, reducing heap allocations means the garbage collector will have fewer things to clean up, whereas the cleanup of stack frames is trivial and requires no additional processing.
However, this is all moot (imho) because in practice when you have complicated collections of variables and objects you are likely going to end up grouping them together into a class. In general, you should be writing to promote readability and maintainability rather than trying to squeeze every last drop of performance out of the JVM. The JVM is pretty quick as it is, and there is always Moore's Law as a backstop.
It would be difficult to analyze the Big-O for each because, to get a true picture, you would have to factor in the behavior of the garbage collector, and that behavior is highly dependent on both the JVM itself and any runtime (JIT) optimizations that the JVM has made to your code.
Please remember Donald Knuth's wise words that "premature optimization is the root of all evil"
Write code that avoids micro-tuning, code that promotes readability and maintainability will fare better over the long run.
If your assumption is that arguments passed to functions necessarily consume memory (which is false by the way), then in your second example that passes an array, a copy of the reference to the array is made. That reference may actually be larger than an int, it's unlikely to be smaller.
Whether these methods take O(1) or O(N) depends on the compiler. (Here N is the value of i or i[0], depending.) If the compiler uses tail-recursion optimization then the stack space for the parameters, local variables, and return address can be reused and the implementation will then be O(1) for space. Absent tail-recursion optimization the space complexity is the same as the time complexity, O(N).
Basically tail-recursion optimization amounts (in this case) to the compiler rewriting your code as
public static int addUp(int i) {
    while (i != 0) i = i - 1;
    return 0;
}
or
public static int addUp(int[] i) {
    while (i[0] != 0) i[0] = i[0] - 1;
    return 0;
}
A good optimizer might further optimize away the loops.
As far as I know, no Java compilers implement tail-recursion optimization at present, but there is no technical reason that it can't be done in many cases.
Actually, when you pass an array as a parameter to a method - a reference to this array is passed under the hood. The array itself is stored on the heap. And the reference can be 4 or 8 bytes in size (depending on CPU architecture, JVM implementation, etc.; even more, JLS doesn't say anything about how big a reference is in memory).
On the other hand, primitive int value always consumes only 4 bytes and resides on the stack.
When you pass an array, the content of the array may be modified by the method that receives the array. When you pass int primitives, those primitives may not be modified by the method that receives them. That's why sometimes you may use primitives and sometimes arrays.
Also in general, in Java programming you tend to favor readability and let this kind of memory optimizations be done by the JIT compiler.
The int array reference actually takes up more space in the stack frames than an int primitive (8 bytes vs 4). You're actually using more space.
But I think the primary reason people prefer the first way is because it's clearer and more legible.
People actually do things a lot closer to the second way when more ints are involved.
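As a hedged sketch of that "second way" with more ints: rather than a one-element array, idiomatic Java usually groups the mutable values into a small class (all names here are invented for illustration):

```java
// A tiny mutable holder: the callee updates both fields through one
// reference, which is what the int[1] trick was simulating.
class Accumulator {
    private int count;
    private long sum;

    void add(int value) {
        count++;
        sum += value;
    }

    int count() { return count; }
    long sum()  { return sum; }
}
```

A method that receives the Accumulator can update several values at once through the single reference, without any array indexing.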

Understanding how to memcpy with TheUnsafe

I read stuff about TheUnsafe, but I get confused by the fact that, unlike C/C++, we have to work out the offset of things, and there's also the 32-bit VM vs the 64-bit VM, which may or may not have different pointer sizes depending on whether a particular VM setting is turned on or off (also, I'm assuming all offsets to data are based on pointer arithmetic, which would influence them too).
Unfortunately, it seems everything ever written about how to use TheUnsafe stems from one article only (the one that happened to be first), and all the others copy-pasted from it to some degree. Not many of them exist, and some are unclear because the author apparently did not speak English.
My question is:
How can I find the offset of a field + the pointer to the instance that owns that field (or field of a field, or field, of a field, of a field...) using TheUnsafe
How can I use it to perform a memcpy to another pointer + offset memory address
Considering the data may have several GB in size, and considering the heap offers no direct control over data alignment and it may most certainly be fragmented because:
1) I don't think there's anything stopping the VM from allocating field1 at offset + 10 and field2 at offset sizeof(field1) + 32, is there?
2) I would also assume the GC would move big chunks of data around, leading to a field with 1GB in size being fragmented sometimes.
So is the memcpy operation as I described even possible?
If data is fragmented because of GC, of course the heap has a pointer to where the next chunk of data is, but using the simple process described above doesn't seem to cover that.
So must the data be off-heap for this to (maybe) work? If so, how do I allocate off-heap data using TheUnsafe, make such data work as a field of an instance, and of course free the allocated memory once done with it?
I encourage anyone who didn't quite understand the question to ask for any specifics they need to know.
I also urge people to refrain from answering if their whole idea is "put all the objects you need to copy in an array and use System.arraycopy". I know it's common practice in this wonderful forum to, instead of answering what's been asked, offer a complete alternate solution that, in principle, has nothing to do with the original question apart from the fact that it gets the same job done.
Best regards.
First a big warning: “Unsafe must die” http://blog.takipi.com/still-unsafe-the-major-bug-in-java-6-that-turned-into-a-java-9-feature/
Some prerequisites
static class DataHolder {
    int i1;
    int i2;
    int i3;
    DataHolder d1;
    DataHolder d2;

    public DataHolder(int i1, int i2, int i3, DataHolder dh) {
        this.i1 = i1;
        this.i2 = i2;
        this.i3 = i3;
        this.d1 = dh;
        this.d2 = this;
    }
}

Field theUnsafe = Unsafe.class.getDeclaredField("theUnsafe");
theUnsafe.setAccessible(true);
Unsafe unsafe = (Unsafe) theUnsafe.get(null);

DataHolder dh1 = new DataHolder(11, 13, 17, null);
DataHolder dh2 = new DataHolder(23, 29, 31, dh1);
The basics
To get the offset of a field (i1), you can use the following code:
Field fi1 = DataHolder.class.getDeclaredField("i1");
long oi1 = unsafe.objectFieldOffset(fi1);
and to access the field value of instance dh1 you can write
System.out.println(unsafe.getInt(dh1, oi1)); // will print 11
You can use similar code to access an object reference (d1):
Field fd1 = DataHolder.class.getDeclaredField("d1");
long od1 = unsafe.objectFieldOffset(fd1);
and you can use it to get the reference to dh1 from dh2:
System.out.println(dh1 == unsafe.getObject(dh2, od1)); // will print true
Field ordering and alignment
To get the offsets of all declared fields of an object:
for (Field f : DataHolder.class.getDeclaredFields()) {
    if (!Modifier.isStatic(f.getModifiers())) {
        System.out.println(f.getName() + " " + unsafe.objectFieldOffset(f));
    }
}
In my tests it seems that the JVM reorders fields as it sees fit (i.e. adding a field can yield completely different offsets on the next run).
An object's address in native memory
It's important to understand that the following code is going to crash your JVM sooner or later, because the Garbage Collector will move your objects at random times, without you having any control on when and why it happens.
Also it's important to understand that the following code depends on the JVM type (32 bits versus 64 bits) and on some start parameters for the JVM (namely, usage of compressed oops on 64 bit JVMs).
On a 32-bit VM, a reference to an object has the same size as an int. So what do you get if you call int addr = unsafe.getInt(dh2, od1); instead of unsafe.getObject(dh2, od1)? Could it be the native address of the object?
Let's try:
System.out.println(unsafe.getInt(null, unsafe.getInt(dh2, od1)+oi1));
will print out 11 as expected.
On a 64 bit VM without compressed oops (-XX:-UseCompressedOops), you will need to write
System.out.println(unsafe.getInt(null, unsafe.getLong(dh2, od1)+oi1));
On a 64 bit VM with compressed oops (-XX:+UseCompressedOops), things are a bit more complicated. This variant has 32 bit object references that are turned into 64 bit addresses by multiplying them with 8L:
System.out.println(unsafe.getInt(null, 8L * (0xffffffffL & unsafe.getInt(dh2, od1)) + oi1));
What is the problem with these accesses
The problem is the Garbage Collector together with this code. The Garbage Collector can move objects around as it pleases. Since the JVM knows about its own object references (the local variables dh1 and dh2, and the fields d1 and d2 of these objects), it can adjust those references accordingly; your code will never notice.
By extracting object references into int/long variables you turn these object references into primitive values that happen to have the same bit-pattern as an object reference, but the Garbage Collector does not know that these were object references (they could have been generated by a random generator as well) and therefore does not adjust these values while moving objects around. So as soon as a Garbage Collection cycle is triggered your extracted addresses are no longer valid, and trying to access memory at these addresses might crash your JVM immediately (the good case) or you might trash your memory without noticing on the spot (the bad case).
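For the off-heap part of the original question, here is a minimal sketch using Unsafe.allocateMemory / copyMemory / freeMemory (the class and method names are mine; this relies on sun.misc.Unsafe being reachable via reflection, which works on current JDKs but is exactly what the warning above is about). Off-heap blocks are never moved by the GC, which sidesteps the moving-object problem, but you must free them yourself:

```java
import sun.misc.Unsafe;
import java.lang.reflect.Field;

public class OffHeapCopy {
    // The usual reflective trick to obtain the Unsafe singleton.
    static Unsafe unsafe() throws Exception {
        Field f = Unsafe.class.getDeclaredField("theUnsafe");
        f.setAccessible(true);
        return (Unsafe) f.get(null);
    }

    // Write ints into one off-heap block, memcpy it to another, read back.
    static int[] roundTrip(int[] values) throws Exception {
        Unsafe u = unsafe();
        long bytes = 4L * values.length;
        long src = u.allocateMemory(bytes); // off-heap: the GC never moves it
        long dst = u.allocateMemory(bytes);
        try {
            for (int i = 0; i < values.length; i++) {
                u.putInt(src + 4L * i, values[i]);
            }
            u.copyMemory(src, dst, bytes);  // the memcpy the question asks about
            int[] out = new int[values.length];
            for (int i = 0; i < values.length; i++) {
                out[i] = u.getInt(dst + 4L * i);
            }
            return out;
        } finally {
            u.freeMemory(src);              // manual lifetime management
            u.freeMemory(dst);
        }
    }

    public static void main(String[] args) throws Exception {
        int[] copy = roundTrip(new int[] {11, 13, 17});
        System.out.println(java.util.Arrays.toString(copy));
    }
}
```

Note that nothing here makes the off-heap block behave like "a field of an instance"; an object can only hold the long address, and the wrapper has to do all reads and writes through Unsafe.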

Helping the JVM with stack allocation by using separate objects

I have a bottleneck method which attempts to add points (as x-y pairs) to a HashSet. The common case is that the set already contains the point in which case nothing happens. Should I use a separate point for adding from the one I use for checking if the set already contains it? It seems this would allow the JVM to allocate the checking-point on stack. Thus in the common case, this will require no heap allocation.
Ex. I'm considering changing
HashSet<Point> set;

public void addPoint(int x, int y) {
    if (set.add(new Point(x, y))) {
        // Do some stuff
    }
}
to
HashSet<Point> set;

public void addPoint(int x, int y) {
    if (!set.contains(new Point(x, y))) {
        set.add(new Point(x, y));
        // Do some stuff
    }
}
Is there a profiler which will tell me whether objects are allocated on heap or stack?
EDIT: To clarify why I think the second might be faster, in the first case the object may or may not be added to the collection, so it's not non-escaping and cannot be optimized. In the second case, the first object allocated is clearly non-escaping so it can be optimized by the JVM and put on stack. The second allocation only occurs in the rare case where it's not already contained.
Marko Topolnik properly answered your question; the space allocated for the first new Point may or may not be immediately freed and it is probably foolish to bank on it happening. But I want to expand on why you're currently in a deep state of sin:
You're trying to optimise this the wrong way.
You've identified object creation to be the bottleneck here. I'm going to assume that you're right about this. You're hoping that, if you create fewer objects, the code will run faster. That might be true, but it will never run very fast as you've designed it.
Every object in Java has a pretty fat header (16 bytes; an 8-byte "mark word" full of bit fields and an 8-byte pointer to the class type) and, depending on what's happened in your program thus far, possibly another pretty fat trailer. Your HashSet isn't storing just the contents of your objects; it's storing pointers to those fat-headers-followed-by-contents. (Actually, it's storing pointers to Entry classes that themselves store pointers to Points. Two levels of indirection there.)
A HashSet lookup, then, figures out which bucket it needs to look at and then chases one pointer per thing in the bucket to do the comparison. (As one great big chain in series.) There probably aren't very many of these objects, but they almost certainly aren't stored close together, making your cache angry. Note that object allocation in Java is extremely cheap---you just increment a pointer---and that this is quite probably a bigger source of slowness.
Java doesn't provide any abstraction like C++'s templates, so the only real way to make this fast and still provide the Set abstraction is to copy HashSet's code, change all of the data structures to represent your objects inline, modify the methods to work with the new data structures, and, if you're still worried, make copies of the relevant methods that take a list of parameters corresponding to object contents (i.e. contains(int, int)) that do the right thing without constructing a new object.
This approach is error-prone and time-consuming, but it's necessary unfortunately often when working on Java projects where performance matters. Take a look at the Trove library Marko mentioned and see if you can use it instead; Trove did exactly this for the primitive types.
With that out of the way, a monomorphic call site is one where only one method is called. Hotspot aggressively inlines calls from monomorphic call sites. You'll notice that HashSet.contains punts to HashMap.containsKey. You'd better pray for HashMap.containsKey to be inlined since you need the hashCode call and equals calls inside to be monomorphic. You can verify that your code is being compiled nicely by using the -XX:+PrintAssembly option and poring over the output, but it's probably not---and even if it is, it's probably still slow because of what a HashSet is.
As soon as you have written new Point(x,y), you are creating a new object. It may happen not to be placed on the heap, but that's just a bet you can lose. For example, the contains call should be inlined for the escape analysis to work, or at least it should be a monomorphic call site. All this means that you are optimizing against a quite erratic performance model.
If you want to avoid allocation the solid way, you can use the Trove library's TLongHashSet and have your (int, int) pairs encoded as single long values.
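The (int, int) -> long encoding mentioned there can be sketched like this (the pack/unpack helper names are mine, not Trove API):

```java
// Pack x into the high 32 bits and y into the low 32 bits of one long.
// The mask keeps a negative y from sign-extending over x's bits.
final class PointCodec {
    static long pack(int x, int y) {
        return ((long) x << 32) | (y & 0xFFFFFFFFL);
    }

    static int unpackX(long key) { return (int) (key >> 32); }
    static int unpackY(long key) { return (int) key; }
}
```

Each (x, y) pair becomes one primitive long, so a long-keyed set stores the points with no per-point object at all.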

Are arrays of 'structs' theoretically possible in Java?

There are cases when one needs a memory-efficient way to store lots of objects. To do that in Java you are forced to use several primitive arrays (see below why) or a big byte array, which costs a bit of CPU overhead for converting.
Example: you have a class Point { float x; float y; }. Now you want to store N points in an array, which would take at least N * 8 bytes for the floats and N * 4 bytes for the references on a 32-bit JVM. So at least 1/3 is wasted (not counting the normal per-object overhead here). But if you stored this in two float arrays, all would be fine.
My question: Why does Java not optimize the memory usage for arrays of references? I mean, why not directly embed the objects in the array, like it is done in C++?
E.g. marking the class Point final should be sufficient for the JVM to see the maximum length of the data for the Point class. Or where would this be against the specification? Also this would save a lot of memory when handling large n-dimensional matrices, etc.
Update:
I would like to know whether the JVM could theoretically optimize this (e.g. behind the scenes) and under which conditions - not whether I can force the JVM to do it somehow. I think the second point of the conclusions is the reason it cannot be done easily, if at all.
Conclusions what the JVM would need to know:
The class needs to be final to let the JVM guess the length of one array entry
The array needs to be read only. Of course you can change the values like Point p = arr[i]; p.setX(i) but you cannot write to the array via inlineArr[i] = new Point(). Or the JVM would have to introduce copy semantics which would be against the "Java way". See aroth's answer
How to initialize the array (calling the default constructor or leaving the members initialized to their default values)
Java doesn't provide a way to do this because it's not a language-level choice to make. C, C++, and the like expose ways to do this because they are system-level programming languages where you are expected to know system-level features and make decisions based on the specific architecture that you are using.
In Java, you are targeting the JVM. The JVM doesn't specify whether or not this is permissible (I'm making an assumption that this is true; I haven't combed the JLS thoroughly to prove that I'm right here). The idea is that when you write Java code, you trust the JIT to make intelligent decisions. That is where the reference types could be folded into an array or the like. So the "Java way" here would be that you cannot specify if it happens or not, but if the JIT can make that optimization and improve performance it could and should.
I am not sure whether this optimization in particular is implemented, but I do know that similar ones are: for example, objects allocated with new are conceptually on the "heap", but if the JVM notices (through a technique called escape analysis) that the object is method-local it can allocate the fields of the object on the stack or even directly in CPU registers, removing the "heap allocation" overhead entirely with no language change.
Update for updated question
If the question is "can this be done at all", I think the answer is yes. There are a few corner cases (such as null pointers) but you should be able to work around them. For null references, the JVM could convince itself that there will never be null elements, or keep a bit vector as mentioned previously. Both of these techniques would likely be predicated on escape analysis showing that the array reference never leaves the method, as I can see the bookkeeping becoming tricky if you try to e.g. store it in an object field.
The scenario you describe might save on memory (though in practice I'm not sure it would even do that), but it probably would add a fair bit of computational overhead when actually placing an object into an array. Consider that when you do new Point() the object you create is dynamically allocated on the heap. So if you allocate 100 Point instances by calling new Point() there is no guarantee that their locations will be contiguous in memory (and in fact they will most likely not be allocated to a contiguous block of memory).
So how would a Point instance actually make it into the "compressed" array? It seems to me that Java would have to explicitly copy every field in Point into the contiguous block of memory that was allocated for the array. That could become costly for object types that have many fields. Not only that, but the original Point instance is still taking up space on the heap, as well as inside of the array. So unless it gets immediately garbage-collected (I suppose any references could be rewritten to point at the copy that was placed in the array, thereby theoretically allowing immediate garbage-collection of the original instance) you're actually using more storage than you would be if you had just stored the reference in the array.
Moreover, what if you have multiple "compressed" arrays and a mutable object type? Inserting an object into an array necessarily copies that object's fields into the array. So if you do something like:
Point p = new Point(0, 0);
Point[] compressedA = {p}; // assuming 'p' is "optimally" stored as {0,0}
Point[] compressedB = {p}; // assuming 'p' is "optimally" stored as {0,0}
compressedA[0].setX(5);
compressedB[0].setX(1);
System.out.println(p.x);
System.out.println(compressedA[0].x);
System.out.println(compressedB[0].x);
...you would get:
0
5
1
...even though logically there should only be a single instance of Point. Storing references avoids this kind of problem, and also means that in any case where a nontrivial object is being shared between multiple arrays your total storage usage is probably lower than it would be if each array stored a copy of all of that object's fields.
Isn't this tantamount to providing trivial classes such as the following?
class Fixed {
    float[] hiddenArr;

    Point pointArray(int position) {
        return new Point(hiddenArr[position * 2], hiddenArr[position * 2 + 1]);
    }
}
Also, it's possible to implement this without making the programmer explicitly state that they'd like it; the JVM is already aware of "value types" (POD types in C++), ones that contain only other plain-old-data types. I believe HotSpot uses this information during stack elision; no reason it couldn't do it for arrays too?

How to calculate the size of class in java ..?is any method like sizeof() in c..? [duplicate]

This question already has answers here:
Closed 12 years ago.
Possible Duplicate:
Is there any sizeof-like method in Java?
I want to determine the size of my class at runtime. In C I do this:
struct {
    int id;
    char name[30];
} registry;

int main() {
    int size = sizeof(registry);
    return 0;
}
How can I do this in Java?
You can't. The Java virtual machine doesn't want you to know it.
There are probably other ways to do what you really wanted to do with this information.
VMs differ in how they store variables, etc., internally. Most modern 32-bit VMs are similar, though. You can probably estimate the shallow size of a class instance like this:
sizeInBytes = C + 4 * (fieldCount)
Where C is some constant.
This is because typically all fields are given a word width, which is often still 4 bytes internally to the JVM. The deep size of the class is more difficult to compute, but basically you recursively add the size of each referent object. Arrays are typically 1*arr.length bytes in size for booleans and bytes, 2*arr.length for chars and shorts, 4*arr.length for ints and floats, and 8*arr.length for doubles and longs.
Probably the easiest way to get an estimate at runtime is to measure how Runtime.freeMemory() changes as you instantiate objects of your class. None of this should be used for program logic, of course; just for JVM tweaking or curiosity.
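A minimal sketch of that measurement (the class and method names are mine; treat the result as a rough guess, since the GC can run at any moment and System.gc() is only a hint):

```java
import java.util.function.Supplier;

class SizeEstimator {
    // Estimate the per-instance footprint by measuring used heap before and
    // after a bulk allocation, keeping the instances reachable meanwhile.
    static long estimatePerInstance(int count, Supplier<Object> factory) {
        Runtime rt = Runtime.getRuntime();
        Object[] keep = new Object[count];
        System.gc(); // best effort only, not guaranteed to collect anything
        long before = rt.totalMemory() - rt.freeMemory();
        for (int i = 0; i < count; i++) {
            keep[i] = factory.get();
        }
        long after = rt.totalMemory() - rt.freeMemory();
        long estimate = (after - before) / count;
        // Touch 'keep' so the allocations cannot be optimized away.
        return keep[0] != null ? estimate : -1;
    }
}
```

Allocating a large count averages out per-object alignment and heap-bookkeeping noise, but a GC cycle during the loop can still skew the number, so run it several times and take the median.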
Have a look at ClassMexer which is just a simple implementation of the new java.lang.instrument.Instrumentation interface. Just follow the instructions, which are pretty simple.
It may be too intrusive for your use as it runs as javaagent but from your example it should be ok.
You can get an estimate, but getting the exact number is difficult. Why is it necessary?
