Java Object[] cast to long[][] performance issue?

Hello, I am curious: are there any serious performance consequences from performing the following type of cast, especially when it is performed millions of times? Thanks.
Object[] car = new Object[1];
car[0] = new Long[2][2];
long[][] values = (long[][]) car[0];

Casting really isn't much of a performance issue. It's just metadata telling the compiler how to treat the object. Of course, if you try to use an object as the wrong type, you'll get a ClassCastException, and throwing those can cause performance issues.

I'd say the real performance issue is that this cast makes absolutely no sense whatsoever. Why would you ever do that?

The code you have written will throw a ClassCastException: a Long[][] cannot be cast to long[][]. Throwing millions of ClassCastExceptions will certainly be slow, and it is also unlikely to be the intended logic.
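A minimal sketch of both the failing cast and a working variant that stores a primitive array from the start (assuming that is what was intended):
Object[] car = new Object[1];

car[0] = new Long[2][2];                 // boxed element type
// long[][] broken = (long[][]) car[0]; // would throw ClassCastException at runtime

car[0] = new long[2][2];                 // primitive element type
long[][] values = (long[][]) car[0];     // succeeds: the runtime type matches the cast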

The impact of a cast depends on your JVM and on the code in which it is used. The cast operation should be highly optimized in modern JVMs, since it is used so often with generic classes, so its overhead should be quite low.
If your code has to be fast, you should profile it to find the bottlenecks; that will also tell you whether the cast is a problem in your specific case.

You can't perform the following casts. I suggest you make the array a long[][] from the start, and then you won't need to convert the array type.
// these won't even compile: primitive and boxed arrays are unrelated types
long[] longs = (long[]) new Long[0];
long[][] longs = (long[][]) new Long[0][];
The only way to convert them is to iterate over them.
Long[][] longs = ...
long[][] longs2 = new long[longs.length][];
for (int i = 0; i < longs.length; i++) {
    longs2[i] = new long[longs[i].length];
    for (int j = 0; j < longs[i].length; j++)
        longs2[i][j] = longs[i][j];
}
The main performance cost is in creating the Long[][] in the first place, as you have to create many objects, whereas with long[][] you have only the arrays, making the structure much more efficient in both memory and CPU.
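On Java 8+, the same element-wise copy can be written with streams; a sketch using java.util.Arrays, assuming the boxed array contains no nulls:
Long[][] boxed = { {1L, 2L}, {3L, 4L} };
long[][] prim = Arrays.stream(boxed)
        .map(row -> Arrays.stream(row).mapToLong(Long::longValue).toArray())
        .toArray(long[][]::new);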

If HotSpot can prove that the Object is actually a long[][], the cast costs nothing; otherwise it costs a type check plus a branch prediction, which is usually successful and adds only a single extra CPU cycle.
So casting is not exactly free, as one of the other answers suggests, and it is not only metadata.
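If you want to measure this on your own JVM rather than trust the single-cycle estimate, a JMH micro-benchmark is the usual tool; a sketch (class and field names are mine; requires the JMH library):
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

@State(Scope.Thread)
public class CastBench {
    // Stored as Object so the checkcast cannot be eliminated statically.
    Object hidden = new long[2][2];

    @Benchmark
    public long checkedCast() {
        return ((long[][]) hidden)[0][0]; // cast + load, the operation in question
    }
}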

Related

Does introducing an intermediate list cause performance overhead?

List<UserData> dataList = new ArrayList<>();
List<UserData> dataList1 = dataRepository.findAllByProcessType(ProcessType.OUT);
List<UserData> dataList2 = dataRepository.findAllByProcessType(ProcessType.CORPORATE_OUT);
dataList.addAll(dataList1);
dataList.addAll(dataList2);
return dataList;
vs
List<UserData> dataList = new ArrayList<>();
dataList.addAll(dataRepository.findAllByProcessType(ProcessType.OUT));
dataList.addAll(dataRepository.findAllByProcessType(ProcessType.CORPORATE_OUT));
return dataList;
Will the first implementation cause any performance overhead (i.e. more garbage / memory allocation than the second one)?
P.S. Yes, it can be optimised into one round trip to the DB, as mentioned by #Tim. But that's not the answer I am looking for. I want to know in general whether this kind of implementation causes overhead, because this kind of implementation helps with debugging.
I'm going to say no. The two code blocks compile to nearly identical bytecode; the only difference is a couple of trivial local-variable stores and loads, which the JIT eliminates.
The first code does not "introduce an intermediate list". All it does is create new variables referencing lists that were created by the dataRepository calls, and those variables are optimised away at runtime.
Those lists are also created in the second code example, so there's no real difference.
Knowing that the compiler performs these sorts of optimisations frees us as programmers to write code that is well laid-out, clear, and maintainable, whilst still remaining confident that it will perform well.
The other consideration is debugging. In the first code block, it is easy to set breakpoints on the variable declaration lines and inspect the values of the variables. Those simple operations become a pain when the code is written as in the second block.
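If you want to check the bytecode claim yourself, javap will disassemble both versions; a sketch (the class name is hypothetical):
javac UserDataService.java
javap -c UserDataService
The only difference you should see in the first version is a few extra astore/aload instructions for the named locals.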
As the addAll() method just references the same element objects, both of your versions should perform about the same. But the best thing to do here is to avoid the two unnecessary round trips to your database and use a single query:
List<ProcessType> types = Arrays.asList(ProcessType.OUT, ProcessType.CORPORATE_OUT);
List<UserData> dataList = dataRepository.findAllByProcessTypeIn(types);

Java int memory usage

While I was thinking over the memory usage of various types, I became a bit confused about how Java uses memory for integers when they are passed to a method.
Say, I had the following code:
public static void main(String[] args) {
    int i = 4;
    addUp(i);
}

public static int addUp(int i) {
    if (i == 0) return 0;
    else return addUp(i - 1);
}
In this following example, I am wondering if my following logic was correct:
I have initially allocated memory for the integer i = 4. Then I pass it to a method. However, since primitives are copied rather than referenced in Java, the call addUp(i == 4) creates another integer i = 4. Afterwards there are the calls addUp(i == 3), addUp(i == 2), addUp(i == 1), addUp(i == 0), and each time, since the value is copied, a new i value is allocated in memory.
So for a single "int i" value, I have used memory for 6 separate int values.
However, if I were to always pass it through an array:
public static void main(String[] args) {
    int[] i = {4};
    // int tempI = i[0];
    addUp(i);
}

public static int addUp(int[] i) {
    if (i[0] == 0) return 0;
    i[0] = i[0] - 1; // decrement in place, then recurse with the same array
    return addUp(i);
}
Since I create an integer array of size 1 and then pass it to addUp, which recurses through addUp(i[0] == 3), addUp(i[0] == 2), addUp(i[0] == 1), addUp(i[0] == 0), I only ever use one integer array's worth of memory, which seems far more cost-efficient. In addition, if I store the initial value of i[0] in an int beforehand, I still have my "original" value.
Then this leads me to the question, why do people pass primitives like int in Java methods? Isn't it far more memory efficient to just pass the array values of those primitives? Or is the first example somehow still just O(1) memory?
And on top of this question, I just wonder about the memory differences between using int[] and int, especially for a size of 1. Thank you in advance; I was simply wondering about being more memory-efficient with Java, and this came to my head.
Thanks for all the answers! I'm just now quickly wondering if I were to "analyze" big-oh memory of each code, would they both be considered O(1) or would that be wrong to assume?
What you are missing here: the int values in your example go on the stack, not on the heap.
And it is much less overhead to deal with fixed size primitive values existing on the stack - compared to objects on the heap!
In other words: using a "pointer" means that you have to create a new object on the heap. All objects live on the heap; there is no stack allocation for arrays! And objects become subject to garbage collection immediately after you stop using them, whereas stack frames simply come and go as you invoke methods!
Beyond that: keep in mind that the abstractions that programming languages provide to us are created to help us writing code that is easy to read, understand and maintain. Your approach is basically to do some sort of fine tuning that leads to more complicated code. And that is not how Java solves such problems.
Meaning: with Java, the real "performance magic" happens at runtime, when the just-in-time compiler kicks in! You see, the JIT can inline calls to small methods when the method is invoked "often enough". And then it becomes even more important to keep data "close" together. As in: when data lives on the heap, you might have to access memory to get a value. Whereas items living on the stack - might still be "close" (as in: in the processor cache). So your little idea to optimize memory usage could actually slow down program execution by orders of magnitude. Because even today, there are orders of magnitude between accessing the processor cache and reading main memory.
Long story short: avoid getting into such "micro-tuning" for either performance or memory usage: the JVM is optimized for the "normal, typical" use cases. Your attempts to introduce clever work-arounds can therefore easily result in "less good" results.
So, when you worry about performance: do what everybody else is doing. And if you really care, then learn how the JVM works. As it turns out, even my knowledge is slightly outdated, as the comments imply that a JIT can allocate objects on the stack. In that sense: focus on writing clean, elegant code that solves the problem in a straightforward way!
Finally: this is subject to change at some point. There are ideas to introduce true value objects to Java, which would basically live on the stack, not the heap. But don't expect that to happen before Java 10. Or 11. Or ... (I think this would be relevant here).
Several things:
The first thing is splitting hairs, but when you pass an int in Java you are allocating 4 bytes on the stack, whereas when you pass an array (because it is a reference) you are actually allocating 8 bytes (assuming an x64 architecture) on the stack, plus the heap allocation for the array that stores the int.
More importantly, the data that lives in the array is allocated on the heap, whereas the reference to the array itself lives on the stack; when passing an integer, no heap allocation is required, as the primitive is allocated only on the stack. Over time, reducing heap allocations means the garbage collector has fewer things to clean up, whereas the cleanup of stack frames is trivial and requires no additional processing.
However, this is all moot (imho) because in practice when you have complicated collections of variables and objects you are likely going to end up grouping them together into a class. In general, you should be writing to promote readability and maintainability rather than trying to squeeze every last drop of performance out of the JVM. The JVM is pretty quick as it is, and there is always Moore's Law as a backstop.
It would be difficult to analyze the Big-O for each, because to get a true picture you would have to factor in the behavior of the garbage collector, and that behavior is highly dependent on both the JVM itself and any runtime (JIT) optimizations that the JVM has made to your code.
Please remember Donald Knuth's wise words that "premature optimization is the root of all evil"
Write code that avoids micro-tuning, code that promotes readability and maintainability will fare better over the long run.
If your assumption is that arguments passed to functions necessarily consume memory (which is false, by the way), then note that in your second example, which passes an array, a copy of the reference to the array is made. That reference may actually be larger than an int; it is unlikely to be smaller.
Whether these methods take O(1) or O(N) depends on the compiler. (Here N is the value of i or i[0], depending.) If the compiler uses tail-recursion optimization then the stack space for the parameters, local variables, and return address can be reused and the implementation will then be O(1) for space. Absent tail-recursion optimization the space complexity is the same as the time complexity, O(N).
Basically tail-recursion optimization amounts (in this case) to the compiler rewriting your code as
public static int addUp(int i) {
    while (i != 0) i = i - 1;
    return 0;
}
or
public static int addUp(int[] i) {
    while (i[0] != 0) i[0] = i[0] - 1;
    return 0;
}
A good optimizer might further optimize away the loops.
As far as I know, no Java compilers implement tail-recursion optimization at present, but there is no technical reason that it can't be done in many cases.
Actually, when you pass an array as a parameter to a method, a reference to the array is passed under the hood. The array itself is stored on the heap, and the reference can be 4 or 8 bytes in size (depending on CPU architecture, JVM implementation, etc.; in fact, the JLS doesn't say anything about how big a reference is in memory).
On the other hand, a primitive int value always consumes only 4 bytes and resides on the stack.
When you pass an array, the content of the array may be modified by the method that receives the array. When you pass int primitives, those primitives may not be modified by the method that receives them. That's why sometimes you may use primitives and sometimes arrays.
Also in general, in Java programming you tend to favor readability and let this kind of memory optimizations be done by the JIT compiler.
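A minimal sketch of that difference (the method names are mine):
static void bump(int x)   { x++; }    // increments a copy; the caller never sees it
static void bump(int[] x) { x[0]++; } // increments the shared array slot

int a = 1;
bump(a);          // a is still 1
int[] b = { 1 };
bump(b);          // b[0] is now 2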
The int array reference actually takes up more space in the stack frame than an int primitive (8 bytes vs 4), so you are actually using more space.
But I think the primary reason people prefer the first way is because it's clearer and more legible.
People actually do do things a lot closer to the second when more ints are involved.

Java: re-initializing an object with new, performance and memory

I want to know the drawbacks of writing the code below, which uses new to reinitialize the variable each time a different value object is needed.
List<Value> valueList = new ArrayList<>();
Value value = new Value();
value.setData("1");
valueList.add(value);

value = new Value();
value.setData("2");
valueList.add(value);

value = new Value();
value.setData("3");
valueList.add(value);
or a method could be added to return a value object similar to:
private Value getData(String input) {
    Value value = new Value();
    value.setData(input);
    return value;
}

List<Value> valueList = new ArrayList<>();
valueList.add(getData("1"));
valueList.add(getData("2"));
valueList.add(getData("3"));
Code-wise, the second approach looks better to me.
Please suggest the best approach in terms of memory and performance.
Both options create 3 objects and add them to a list. There is no difference in memory. Performance doesn't matter either: if this code is executed often enough to "matter", the JIT will inline those method calls anyway, and if the JIT decides it is not important enough to inline, then we are talking about nanoseconds anyway.
Thus: focus on writing clean code that gets the job done in a straight forward way.
From that perspective, I would suggest that you instead give Value a constructor that takes the data; then you can write:
List<Value> values = Arrays.asList(new Value("1"), new Value("2"), new Value("3"));
Long story short: performance is a luxury problem, meaning you only worry about performance when your tests/customers complain about things taking too long.
Before that, you worry about creating a good, sound OO design and writing a correct implementation. It is much easier to fix a specific performance problem within a well-built application than to get "quality" into a code base driven by thoughts like the ones in your question.
Please note: that of course implies that you are aware of typical performance pitfalls which should be avoided; an experienced Java programmer knows how to implement things in an efficient way.
But as a newbie, you should focus on writing correct, human-readable programs. Keep in mind that your CPU does billions of cycles per second, so performance is simply OK by default. You only have to worry when you are doing things at very large scale.
Finally: option 2 is in fact "better" - because it reduces the amount of code duplication.
In both cases, you create 3 instances of Value that are stored in a List.
There is no significant difference in terms of consumed memory.
The second nevertheless produces cleaner code: you don't reuse the same variable, and the variable has a limited scope.
You also have a factory method that does the job and returns the object for you, so client code just has to "consume" it.
An alternative is a method with a varargs parameter:
private List<Value> getData(String... input) {
    // check not null before
    List<Value> values = new ArrayList<>();
    for (String s : input) {
        Value value = new Value();
        value.setData(s);
        values.add(value);
    }
    return values;
}
List<Value> values = getData("1", "2", "3");
There is no difference in memory footprint, and there's little difference in performance, because method invocations are very inexpensive.
The second form of your code is a better-looking version of the first form of your code, with less code repetition. Other than that, the two are equivalent.
You can shorten your code by using streams (this assumes Value has a constructor that takes the String):
List<Value> values = Arrays.asList("1", "2", "3").stream()
        .map(Value::new)
        .collect(Collectors.toList());
Every time you call the new operator to create an object, space for that object is allocated on the heap. It doesn't matter whether you use the first or the second approach; the objects are allocated in heap space the same way.
What you need to understand, though, is the life cycle of each object you create, and terms like dependency, aggregation, association and full composition.

What is the difference between primitive data types and wrapper classes? Is the use of primitive data types in Java a violation of object-oriented rules? [duplicate]

Since Java 5, we've had boxing/unboxing of primitive types, so that int is wrapped as java.lang.Integer, and so on and so forth.
I see a lot of new Java projects lately (that definitely require a JRE of at least version 5, if not 6) that use int rather than java.lang.Integer, though it's much more convenient to use the latter, as it has a few helper methods for converting to long values et al.
Why do some still use primitive types in Java? Is there any tangible benefit?
In Joshua Bloch's Effective Java, Item 5: "Avoid creating unnecessary objects", he posts the following code example:
public static void main(String[] args) {
    Long sum = 0L; // uses Long, not long
    for (long i = 0; i <= Integer.MAX_VALUE; i++) {
        sum += i;
    }
    System.out.println(sum);
}
and it takes 43 seconds to run. Changing the Long to a primitive long brings it down to 6.8 seconds, if that's any indication of why we use primitives.
The lack of native value equality is also a concern (.equals() is fairly verbose compared to ==)
for biziclop:
class Biziclop {
    public static void main(String[] args) {
        System.out.println(new Integer(5) == new Integer(5));
        System.out.println(new Integer(500) == new Integer(500));
        System.out.println(Integer.valueOf(5) == Integer.valueOf(5));
        System.out.println(Integer.valueOf(500) == Integer.valueOf(500));
    }
}
Results in:
false
false
true
false
EDIT Why does (3) return true and (4) return false?
Because they are two different objects. The 256 integers closest to zero [-128, 127] are cached by the JVM, so valueOf returns the same object for those. Beyond that range they aren't cached, so a new object is created. To make things more complicated, the JLS demands that at least those 256 flyweights be cached; JVM implementers may add more if they desire, meaning this could run on a system where the nearest 1024 are cached and both valueOf lines print true... #awkward
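On HotSpot you can actually widen the cache and watch line (4) flip to true; a sketch, assuming a HotSpot JVM:
java -XX:AutoBoxCacheMax=1024 Biziclop
The first two lines stay false regardless, because new Integer(...) always bypasses the cache.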
Auto-unboxing can lead to hard-to-spot NPEs:
Integer in = null;
...
...
int i = in; // NPE at runtime
In most situations the null assignment to in is a lot less obvious than above.
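A classic less-obvious case is a conditional expression that mixes int and Integer operands; a minimal sketch:
Integer in = null;
boolean useDefault = false;
// The mixed int/Integer operands give the whole expression the type int,
// so `in` is auto-unboxed even though the target variable is an Integer:
Integer j = useDefault ? 0 : in; // NullPointerException at runtime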
Boxed types have poorer performance and require more memory.
Primitive types:
int x = 1000;
int y = 1000;
Now evaluate:
x == y
It's true. Hardly surprising. Now try the boxed types:
Integer x = 1000;
Integer y = 1000;
Now evaluate:
x == y
It's false. Probably. Depends on the runtime. Is that reason enough?
Besides the performance and memory issues, I'd like to bring up another issue: the List interface would be broken without int.
The problem is the overloaded remove() method (remove(int) vs. remove(Object)). remove(Integer) would always resolve to calling the latter, so you could not remove an element by index.
On the other hand, there is a pitfall when trying to add and remove an int:
final int i = 42;
final List<Integer> list = new ArrayList<Integer>();
list.add(i); // add(Object)
list.remove(i); // remove(int) - Ouch!
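To remove by value instead of by index, you have to box explicitly:
list.remove(Integer.valueOf(i)); // resolves to remove(Object) and removes the value 42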
Can you really imagine a
for (int i=0; i<10000; i++) {
do something
}
loop with java.lang.Integer instead? A java.lang.Integer is immutable, so each increment round the loop would create a new Java object on the heap, rather than just incrementing the int on the stack with a single JVM instruction. The performance would be diabolical.
I would really disagree that it's much more convenient to use java.lang.Integer than int. On the contrary. Autoboxing means that you can use int where you would otherwise be forced to use Integer, and the Java compiler takes care of inserting the code to create the new Integer object for you. Autoboxing is all about allowing you to use an int where an Integer is expected, with the compiler inserting the relevant object construction. It in no way removes or reduces the need for the int in the first place. With autoboxing you get the best of both worlds: you get an Integer created for you automatically when you need a heap-based Java object, and you get the speed and efficiency of an int when you are just doing arithmetic and local calculations.
Primitive types are much faster:
int i;
i++;
Integer (like all Number subclasses, and also String) is an immutable type: once created, it cannot be changed. If i were an Integer, then i++ would create a new Integer object, which is much more expensive in terms of memory and processor.
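A sketch of roughly what the compiler generates for an Integer increment:
Integer i = 0;
i++; // roughly: i = Integer.valueOf(i.intValue() + 1);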
First and foremost, habit. If you've coded in Java for eight years, you accumulate a considerable amount of inertia. Why change if there is no compelling reason to do so? It's not as if using boxed primitives comes with any extra advantages.
The other reason is to assert that null is not a valid option. It would be pointless and misleading to declare the sum of two numbers or a loop variable as Integer.
There's the performance aspect of it too, while the performance difference isn't critical in many cases (though when it is, it's pretty bad), nobody likes to write code that could be written just as easily in a faster way we're already used to.
By the way, Smalltalk has only objects (no primitives), and yet it optimizes its small integers (using not all 32 bits, only 27 or so) to not allocate any heap space, but simply use a special bit pattern. Other common objects (true, false, null) also have special bit patterns there.
So, at least on 64-bit JVMs (with a 64-bit pointer namespace), it should be possible to have no objects of Integer, Character, Byte, Short, Boolean, Float (and small Long) at all (apart from those created by explicit new ...()), only special bit patterns, which could be manipulated by the normal operators quite efficiently.
I can't believe no one has mentioned what I think is the most important reason:
"int" is so, so much easier to type than "Integer". I think people underestimate the importance of a concise syntax. Performance isn't really a reason to avoid them because most of the time when one is using numbers is in loop indexes, and incrementing and comparing those costs nothing in any non-trivial loop (whether you're using int or Integer).
The other given reason was that you can get NPEs but that's extremely easy to avoid with boxed types (and it is guaranteed to be avoided as long as you always initialize them to non-null values).
The other reason was that (new Long(1000))==(new Long(1000)) is false, but that's just another way of saying that ".equals" has no syntactic support for boxed types (unlike the operators <, >, =, etc), so we come back to the "simpler syntax" reason.
I think Steve Yegge's non-primitive loop example illustrates my point very well:
http://sites.google.com/site/steveyegge2/language-trickery-and-ejb
Think about this: how often do you use function types in languages that have good syntax for them (like any functional language, python, ruby, and even C) compared to java where you have to simulate them using interfaces such as Runnable and Callable and nameless classes.
Couple of reasons not to get rid of primitives:
Backwards compatibility.
If it's eliminated, any old programs wouldn't even run.
JVM rewrite.
The entire JVM would have to be rewritten to support this new thing.
Larger memory footprint.
You'd need to store the value and the reference, which uses more memory. If you have a huge array of bytes, using byte is significantly smaller than using Byte.
Null pointer issues.
Declaring int i then doing stuff with i would result in no issues, but declaring Integer i and then doing the same would result in an NPE.
Equality issues.
Consider this code:
Integer i1 = 500;
Integer i2 = 500;
i1 == i2; // currently false: two distinct objects outside the cache range
To make this compare values, operators would have to be overloaded, and that would result in a major rewrite of stuff.
Slow
Object wrappers are significantly slower than their primitive counterparts.
Objects are much more heavyweight than primitive types, so primitive types are much more efficient than instances of wrapper classes.
Primitive types are very simple: for example an int is 32 bits and takes up exactly 32 bits in memory, and can be manipulated directly. An Integer object is a complete object, which (like any object) has to be stored on the heap, and can only be accessed via a reference (pointer) to it. It most likely also takes up more than 32 bits (4 bytes) of memory.
That said, the fact that Java has a distinction between primitive and non-primitive types is also a sign of age of the Java programming language. Newer programming languages don't have this distinction; the compiler of such a language is smart enough to figure out by itself if you're using simple values or more complex objects.
For example, in Scala there are no primitive types; there is a class Int for integers, and an Int is a real object (that you can call methods on, etc.). When the compiler compiles your code, it uses primitive ints behind the scenes, so using an Int is just as efficient as using a primitive int in Java.
In addition to what others have said, primitive local variables are not allocated from the heap, but instead on the stack. But objects are allocated from the heap and thus have to be garbage collected.
It's hard to know what kind of optimizations are going on under the covers.
For local use, when the compiler has enough information to make optimizations excluding the possibility of the null value, I expect the performance to be the same or similar.
However, arrays of primitives are apparently very different from collections of boxed primitives. This makes sense given that very few optimizations are possible deep within a collection.
Furthermore, Integer has a much higher logical overhead compared with int: now you have to worry about whether or not int a = b + c; throws an exception.
I'd use the primitives as much as possible and rely on the factory methods and autoboxing to give me the more semantically powerful boxed types when they are needed.
int loops = 100000000;
long start = System.currentTimeMillis();
for (Long l = new Long(0); l < loops; l++) {
    //System.out.println("Long: " + l);
}
System.out.println("Milliseconds taken to loop '" + loops + "' times around Long: " + (System.currentTimeMillis() - start));

start = System.currentTimeMillis();
for (long l = 0; l < loops; l++) {
    //System.out.println("long: " + l);
}
System.out.println("Milliseconds taken to loop '" + loops + "' times around long: " + (System.currentTimeMillis() - start));
Milliseconds taken to loop '100000000' times around Long: 468
Milliseconds taken to loop '100000000' times around long: 31
On a side note, I wouldn't mind seeing something like this find its way into Java.
Integer loop1 = new Integer(0);
for (loop1.lessThan(1000)) {
...
}
Where the for loop automatically increments loop1 from 0 to 1000
or
Integer loop1 = new Integer(1000);
for (loop1.greaterThan(0)) {
...
}
Where the for loop automatically decrements loop1 from 1000 to 0.
Primitive types have many advantages:
Simpler code to write
Performance is better since you are not instantiating an object for the variable
Since they do not represent a reference to an object there is no need to check for nulls
Use primitive types unless you need to take advantage of the boxing features.
You need primitives for doing mathematical operations.
Primitives take less memory, as answered above, and perform better.
You should ask why the class/object type is required at all.
The reason for having object types is to make our lives easier when we deal with collections. Primitives cannot be added directly to a List/Map; instead you need a wrapper class. The ready-made Integer-style classes help you here, and they also have many utility methods, like Integer.parseInt(str).
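A small sketch of the collection case, where autoboxing supplies the wrapper for you:
List<Integer> list = new ArrayList<>(); // List<int> is not allowed
list.add(42);                           // autoboxed via Integer.valueOf(42)
int n = list.get(0);                    // auto-unboxed back to int
int parsed = Integer.parseInt("42");    // one of the utility methods mentioned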
I agree with the previous answers that using primitive wrapper objects can be expensive.
But if performance is not critical in your application, using objects helps you avoid overflows. For example:
long bigNumber = Integer.MAX_VALUE + 2;
The value of bigNumber is -2147483647, while you would expect it to be 2147483649. It's a bug in the code that would be fixed by writing:
long bigNumber = Integer.MAX_VALUE + 2L; // note that '2' is now a long ('2L')
And bigNumber would be 2147483649. These kinds of bugs are sometimes easy to miss and can lead to unknown behavior or vulnerabilities (see CWE-190).
If you use wrapper objects, the equivalent code won't compile:
Long bigNumber = Integer.MAX_VALUE + 2; // does not compile
So it's easier to catch these kinds of issues when using wrapper objects.
Your question has already been answered thoroughly, so I reply just to add a bit more information not mentioned before.
Because Java performs all mathematical operations on primitive types. Consider this example:
public static int sumEven(List<Integer> li) {
    int sum = 0;
    for (Integer i : li)
        if (i % 2 == 0)
            sum += i;
    return sum;
}
Here, the remainder (%) and compound addition (+=) operations cannot be applied to the Integer (reference) type, so the compiler performs unboxing and then does the operations.
So, be aware of how many autoboxing and unboxing operations happen in a Java program, since performing these operations takes time.
Generally, it is better to keep arguments of reference type and results of primitive type.
The primitive types are much faster and require much less memory; therefore, we might want to prefer using them.
On the other hand, the current Java language specification doesn't allow the use of primitive types in parameterized types (generics), in the Java collections, or in the Reflection API.
When our application needs collections with a big number of elements, we should consider using arrays of as "economical" a type as possible.
For detailed info, see the source: https://www.baeldung.com/java-primitives-vs-objects
To be brief: primitive types are faster and require less memory than boxed ones.

Is Java slow when creating Objects?

In my current project (an OpenGL voxel engine) I have a serious issue when generating models. I have a very object-oriented structure, meaning that even single parameters of my vertices are objects. This way I am creating about 75000 objects for 750 voxels in about 5 seconds. Is Java really this slow when allocating new objects, or am I missing a big failure somewhere in my code?
Very big question. Generally speaking, it depends on the object's class definition and on the amount of work required to construct the object.
Some issues to consider:
avoid finalize methods,
tune memory and GC in order to avoid excessive GC activity,
avoid doing big work in constructors,
do not use synchronization calls during object construction,
use weak references.
Addressing these issues solved my problem.
See also http://oreilly.com/catalog/javapt/chapter/ch04.html
Finally, let me suggest the (nowadays mostly deprecated) object pool pattern, i.e. reusing objects.
Concluding: no, generally speaking, Java object creation is not slow.
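A minimal sketch of the object-pool idea mentioned above, with hypothetical names and a long[][] grid standing in for an expensive-to-build object:
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Deque;

class GridPool {
    private final Deque<long[][]> pool = new ArrayDeque<>();

    long[][] acquire() {
        long[][] grid = pool.poll();                  // reuse if available
        return grid != null ? grid : new long[64][64];
    }

    void release(long[][] grid) {
        for (long[] row : grid) Arrays.fill(row, 0L); // reset state before reuse
        pool.push(grid);
    }
}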
Of course it isn't. The following code allocates 10 million objects and stores them in an array. On my 5 year old notebook, it completes in 1.4 seconds.
import java.math.BigDecimal;
import java.util.Arrays;

public class Test {
    public static void main(String[] args) {
        Object[] o = new Object[10_000_000];
        long start = System.nanoTime();
        for (int i = 0; i < o.length; i++) {
            o[i] = new Object();
        }
        long end = System.nanoTime();
        System.out.println(Arrays.hashCode(o));
        System.out.println(new BigDecimal(end - start).movePointLeft(9));
    }
}
... and that's even though this benchmark is quite naive in that it doesn't trigger just in time compilation of the code under test before starting the timer.
Simply creating 75,000 objects should not take 5 seconds. Take a look at the work your constructor is doing. What else are you doing during this time besides creating the objects? Have you tried timing the code to pinpoint where delays occur?
Objects will be slower than primitives, and they will also consume considerably more memory - so it's possible you are going overboard on them. It's hard to say without seeing more details.
75,000 objects will not take a long time to create though; try this:
List<Integer> numbers = new ArrayList<Integer>();
for (int i = 0; i < 75000; i++) {
    numbers.add(i); // note this will autobox, creating an Integer object for every value beyond the first 128
}
System.out.println(numbers.size());
http://www.tryjava8.com/app/snippets/52d070b1e4b004716da5cb4f
Total time taken less than a second.
When I put the number up to 7,500,000 it finally took a second...
The new operator in Java is very fast compared to the common approach in languages without automatic memory management (e.g. new is usually faster than malloc in C, because it does not need a system call).
Although the new operator can still be a bottleneck, it is certainly not the problem in your case: creating 75K objects should be WAY faster than 5 seconds.
I had the same issue with creating new objects.
My object's constructor allocates a single three-dimensional 64x64x64 array and nothing more, yet FPS fell to a quarter of its value.
I solved this issue by reusing the old object and resetting its state (BTW, that method reallocates the array without losing performance).
If I move the array allocation into a separate method and call it after creating the object, the speed does not increase to an acceptable value.
The object I am creating is in the main game loop.
