Consider the following JNA structure:
public class VkDeviceQueueCreateInfo extends VulkanStructure {
    public VkStructureType sType = VkStructureType.VK_STRUCTURE_TYPE_DEVICE_QUEUE_CREATE_INFO;
    public Pointer pNext;
    public int flags;
    public int queueFamilyIndex;
    public int queueCount;
    public Pointer pQueuePriorities;
}
The sType field is a constant value used by the native layer to identify the type of structure. This works fine for a single instance of this class created using new.
However, if we allocate an array of this structure using toArray(), the sType field is reset to its default value after the constructor has been invoked for each array element, i.e. it clears the field!
Alternatives that didn't work:
Setting the field explicitly in the constructor has no effect (it still gets reset).
Ditto making the field final.
Doesn't seem to matter whether we try the default constructor or one with a JNA Pointer argument.
The only thing that seems to work is to turn off auto-read for the structure:
public class VkDeviceQueueCreateInfo extends VulkanStructure {
    ...
    public VkDeviceQueueCreateInfo() {
        setAutoRead(false);
    }
}
(Note that this structure is only used to write to the native layer, not to read anything back.)
This works, but what is it actually doing? Why is JNA resetting the structure to the native values when there aren't any yet? Is there a way to switch off the auto-read for this field globally?
This is not a big deal for a single structure, but in this project there are several hundred that were code-generated from the native layer, most of which (but not all) have the sType pre-populated as in the above example. Clearly pre-populating the field was not the way to go, but what is the alternative? Will every structure need to be re-generated with the above fiddle?
EDIT: Another related question that comes to mind after brooding on this - what about array fields in a structure? Are they reset to null by the auto-read thingy? The code-generated structures initialise any array fields so that the structure can be sized, e.g. public float[] colour = new float[4];
You are on the right track in pointing out that the auto-read is part of the problem here. When you invoke toArray() you are (usually) changing the memory backing the array to a new native memory allocation, and the auto-allocation zeroes out that memory. So all those 0s are loaded into your array.
The internal Structure.toArray() keeps the values of the first element for your convenience, but does nothing for the remainder, which are instantiated using newInstance() inside a loop. Here are the two lines causing your problem:
array[i] = newInstance(getClass(), memory.share(i*size, size));
array[i].conditionalAutoRead();
My recommendation would be to override toArray() in your own VulkanStructure base class that the others inherit from. You could copy over the existing code and modify it as you see fit (e.g., remove the autoRead).
Or you could overload a toArray() that is passed a collection of Structures and copies the backing memory from the old collection before reading it into the new one. Alternately, if the original memory backing is large enough when toArray() is called, the memory isn't cleared: so you could allocate your own sufficiently large memory, call useMemory() on the first element to change its backing, and copy over the backing memory bytes; they would then be auto-read into the new array elements.
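For the global switch asked about above, one option is to push the setAutoRead(false) fiddle into the common base class rather than regenerating every structure. This is only a sketch, assuming all the generated types extend a VulkanStructure base class and are write-only as described:

import com.sun.jna.Structure;

public abstract class VulkanStructure extends Structure {
    protected VulkanStructure() {
        // Disable the read-back that toArray() triggers per element via
        // conditionalAutoRead(); the zeroed native memory can then no
        // longer clobber field initialisers such as sType.
        setAutoRead(false);
    }
}

Since newInstance() runs the element constructor before conditionalAutoRead() is invoked (see the two lines quoted above), every element produced by toArray() then skips the read.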
In my program I have the following class:
private static class Node
{
    // True if '1', false otherwise (i.e. '0')
    public final boolean isDigitOne;
    // The number represented in the tree modulo the input number
    public final int val;
    // The parent node in the tree
    public final Node parent;

    public Node(boolean isDigitOne, int val, Node parent)
    {
        this.isDigitOne = isDigitOne;
        this.val = val;
        this.parent = parent;
    }
}
I replaced this class with the following two arrays inside a method of another class:
boolean[] product0 = new boolean[num];
int[] product1 = new int[num];
The rest of the program is very similar; the class implementation creates an object when needed, whereas the array implementation allocates the maximum needed memory at the beginning of the execution.
I measured the run time in both cases. I noticed that for smaller values of num the execution time is almost the same, but for larger values the array implementation runs much quicker.
My question is: why does the array implementation run faster?
The class implementation is available at the following link as "Answer 3":
How to find the smallest number with just 0 and 1 which is divided by a given number?
A big part of it will be due to the location of data in memory. An array of objects doesn't store the data itself next to each other in memory; instead it stores references to the locations where the data actually lives. This means that after accessing the array's slot, the system then has to follow the reference to fetch the data from wherever in memory it points. An array of primitives, however, has the data stored directly in the array, so the system only has to do one lookup instead of two. Systems usually work so fast that this isn't noticeable for small amounts of data, but it becomes apparent as the amount of data grows.
Think of arrays as being like a linear organizer with boxes for all the data: with the array reference you can just add the index to get to the relevant box, and it directly contains the boolean or int value, so you can pull it out and do assignments very easily, since it's all right there.
This changes if you have an object, since that is stored as a reference to somewhere else. That's like having an organizer that holds the address of another organizer somewhere else: you have to follow the address to do whatever you want to do, and once you get to that other organizer, it still has to find the place where the data you wanted is kept. This consumes more memory and more cycles as the data grows.
TL;DR: arrays of primitives are faster because the elements have a fixed size and can be addressed directly by index, all in one contiguous block. Objects are slower because they are typically stored as references to other places in memory, so each access means following a reference and then locating the field within the object.
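To make the locality effect concrete, here is a minimal illustrative micro-benchmark (my own sketch, not from the original posts; it ignores JIT warm-up and is not a rigorous measurement):

public class LocalityDemo {
    static class Node { int val; }

    public static void main(String[] args) {
        final int n = 5_000_000;

        Node[] nodes = new Node[n];
        for (int i = 0; i < n; i++) nodes[i] = new Node();
        int[] vals = new int[n];

        long t0 = System.nanoTime();
        long sum1 = 0;
        for (int i = 0; i < n; i++) sum1 += nodes[i].val; // one extra dereference per element
        long t1 = System.nanoTime();

        long sum2 = 0;
        for (int i = 0; i < n; i++) sum2 += vals[i];      // contiguous primitive reads
        long t2 = System.nanoTime();

        System.out.printf("objects: %dms, primitives: %dms (%d/%d)%n",
                (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000, sum1, sum2);
    }
}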
I want to know why an array created in Java is static even when we use the new keyword to define it.
From what I've read, the new keyword allocates memory space on the heap whenever it is encountered during run time, so why do we have to give the size of the array at all during definition?
e.g. Why can't
int[] array1=new int[20];
simply be:
int[] array1=new int[];
I know that it does not grow automatically and we have ArrayList for that, but then what is the use of the keyword new here? It could have been defined as int array1[20]; like we used to do in C/C++ if it has to be static.
P.S. I know this is an amateurish question but I am an amateur, I tried to Google but couldn't find anything comprehensive.
This may be an amateurish question, but it is one of the best amateurish questions you could ask.
In order for Java to allow you to declare arrays without new, it would have to support an additional kind of data type, one which would behave like a primitive in the sense that it would not require allocation, but would be very much unlike a primitive in the sense that it would be of variable size. That would have immensely complicated the compiler and the JVM.
The approach taken by Java is to provide the bare minimum of primitives sufficient to get most things done efficiently, and to let everything else be done using objects. That's why arrays are objects.
Also, you might be a bit confused about the meaning of "static" here. In C, "static" means "of file scope", that is, not visible to other object files. In C++ and in Java, "static" means "belongs to the class" rather than "belongs to instances of the class". So the term "static" is not suitable for describing array allocation; "fixed size" or "fixed, predefined size" would be a more suitable term.
Well, in Java everything is an object, including arrays (they have a length and other data). That's why you cannot use
int var[20];
In Java that would look like an int declaration, and the compiler would be confused by the [20]. Instead, by using this:
int[] var;
You are declaring that var is of type int[] (int array) so Java understands it.
Also, in Java the length of the array and other metadata are stored in the array object itself. For this reason you don't have to declare the size of the array at declaration time; instead, the metadata is stored when the array is created (using new).
Maybe there is a better reason that Oracle has answered already, but the fact that in Java everything is an object must have something to do with it. Java is quite specific about objects and types, unlike C, where you have more freedom but everything is looser (especially with pointers).
The main idea of the array data structure is that all of its elements are located in a sequential run of memory cells. That is why you cannot create an array with a variable size: it would require an unbounded region of memory to be reserved for it, which is impossible.
If you want to change the size of an array, you have to recreate it.
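To make "recreate it" concrete, a small sketch using the standard library helper:

int[] array = new int[20];
// "Growing" an array really means allocating a new, larger array and
// copying the old elements across:
array = java.util.Arrays.copyOf(array, 40); // indices 0..19 copied, 20..39 zero-filled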
Since arrays are fixed-size, they need to know how much memory to allocate at the time they are instantiated.
ArrayLists and other resizable data structures that internally use arrays to store data actually re-allocate larger arrays when their inner array fills up.
My understanding of the OP's reasoning is:
new is used for allocating dynamic objects (ones that can grow, like ArrayList), but arrays are static (they can't grow). So one of the two is unnecessary: the new or the size of the array.
If that is the question, then the answer is simple:
Well, in Java new is necessary for every object allocation, because in Java all objects are dynamically allocated.
It turns out that in Java arrays are objects, unlike in C/C++ where they are not.
All of Java's variables are at most a single 64-bit field. Either primitives like
int (32 bit)
long (64 bit)
...
or references to objects, which depending on JVM / config / OS are 64- or 32-bit fields (but, unlike 64-bit primitives, with guaranteed atomicity).
There is no such thing as C's int[20] "type". Neither is there C's static.
What int[] array = new int[20] boils down to is roughly
int* array = malloc(20 * sizeof(java_int))
Each time you see new in Java you can imagine a malloc plus a call to the constructor method if it's a real object (not just an array). Each object is more or less just a struct of a few primitives and more pointers.
The result is a giant network of relatively small structs pointing to other things. And the garbage collector's task is to free all the leaves that have fallen off the network.
And this is also the reason why you can say Java is pass-by-value: both primitives and pointers are always copied.
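A quick sketch of what "pointers are copied" means in practice (hypothetical helper methods, written just for this illustration):

public class PassByValueDemo {
    static void mutate(int[] a)   { a[0] = 42; }       // the copied reference still points at the caller's array
    static void reassign(int[] a) { a = new int[20]; } // only the local copy of the reference changes

    public static void main(String[] args) {
        int[] data = new int[20];
        mutate(data);    // data[0] is now 42: the array itself is shared
        reassign(data);  // data is unchanged: the caller's variable was never touched
        System.out.println(data[0] + " " + data.length); // prints "42 20"
    }
}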
Regarding static in Java: there is conceptually one struct per class that represents the static context of that class. That's the place where static variables are anchored. Non-static instance variables are anchored in their own per-instance struct:
class Car {
    static int[] forAllCars = new int[20];
    Object perCar;
}
...
new Car();
translates very loosely (my C is terrible) to
struct Car_Static {
    int* forAllCars;
};
struct Car_Instance {
    Object* perCar;
};

// .. at class load time. Happens once, and this is referenced from some
// root object so it can't get garbage collected
struct Car_Static *car_class = (struct Car_Static*) malloc(sizeof(struct Car_Static));
car_class->forAllCars = malloc(20 * sizeof(java_int));

// .. for every new Car();
struct Car_Instance *new_reference = (struct Car_Instance*) malloc(sizeof(struct Car_Instance));
new_reference->perCar = NULL; // all fields get 0'd
new_reference->constructor();
// "new" essentially returns the "new_reference" then
I am a somewhat experienced Java developer and I keep seeing things like this
List<Integer> l = new ArrayList<Integer>(0);
which I really can't understand. What's the point of creating an ArrayList with an initial capacity of 0, when you know it's going to grow beyond the capacity?
Are there any known benefits of doing this?
It keeps the size (in memory) of the ArrayList very small, and is a tactic for when you want the variable to be non-null and ready to use, but you don't expect the List to be populated immediately. If you expect it to be populated immediately, it's best to give it a larger initial capacity: any "growing" of the ArrayList internally creates a new backing array and copies the items over. Growing an ArrayList is expensive and should be minimized.
Or, if you're creating lots of instances of a class that each contain one of these List properties and you don't immediately plan on filling them, you can save a bit of memory by not allocating the room just yet.
However, there is a better way: Collections.emptyList(). Normally you'll want to protect direct access to that list and (as an example) provide domain-specific methods in your class that operate on the internal List. For example, let's say you have a School class that contains a List of student names. (Keeping it simple; note this class is not thread safe.)
public class School {
    private List<String> studentNames = Collections.emptyList();

    public void addStudentName(String name) {
        if (studentNames.isEmpty()) {
            studentNames = new ArrayList<String>();
        }
        studentNames.add(name);
    }

    public void removeStudentName(String name) {
        studentNames.remove(name);
        if (studentNames.isEmpty()) {
            studentNames = Collections.emptyList(); // GC will deallocate the old List
        }
    }
}
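As a quick illustration of the intended behaviour (hypothetical usage, not part of the original answer):

School school = new School();
school.addStudentName("Ada");     // first add lazily allocates the real ArrayList
school.removeStudentName("Ada");  // last remove swaps back to the shared empty list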
If you're willing to make the isEmpty() checks and perform the initialization/assignment, this is a better alternative to creating lots of empty ArrayList instances, as Collections.emptyList() is a static instance (only one exists) and is not modifiable.
For Java 6 (or OpenJDK 7), not specifying an initial size gives you a list with an initial capacity of 10. So depending on how you use the list, it could be very slightly more memory- and/or performance-efficient to initialize the list with size 0.
For Java 7, specifying an initial size of 0 is functionally equivalent to not specifying an initial size.
However, it is actually less efficient, since the call to the constructor with argument 0 incurs a call to new Object[0], whereas if you use the no-args constructor, the initial elementData for your list is set to a statically defined constant named EMPTY_ELEMENTDATA.
Relevant code from ArrayList source:
/**
* Shared empty array instance used for empty instances.
*/
private static final Object[] EMPTY_ELEMENTDATA = {};
In other words, the use of new ArrayList<Integer>(0) seems superfluous; there are no benefits to doing so, and I would use new ArrayList<Integer>() instead.
If additions to that ArrayList are really unlikely and if it's important to keep the size of the ArrayList at a minimum, then I can see that being useful.
Or if the only purpose of that ArrayList is to be a return value from a method, where returning an empty list is a special message to the function caller, like "no results found".
Otherwise, not really.
By default an ArrayList has a capacity of 10, and it is resized by +50% each time it fills up.
By using a lower initial capacity you can sometimes (in theory) save memory. On the other hand, each resize is time consuming, and in most cases this is just a sign of premature optimization.
It is a better approach to give the ArrayList a large initial capacity (if you know roughly how big the list will get), because it reduces the number of resizes and hence improves execution time.
Initializing the ArrayList with 0 creates an empty backing array, which saves memory if you know your list will never hold more than a handful of elements.
Depending on the contract, you can avoid NullPointerExceptions by not having nulls. It is good practice in certain situations; see Effective Java by Joshua Bloch, Item 43: "Return empty arrays or collections, not nulls".
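A small sketch of that advice; the Cheese type and cheesesInStock field are hypothetical, in the spirit of the book's example:

private final List<Cheese> cheesesInStock = new ArrayList<>();

public List<Cheese> getCheeses() {
    // Callers can always iterate the result without a null check; the
    // empty case costs nothing because the instance is shared.
    return cheesesInStock.isEmpty()
            ? Collections.emptyList()
            : new ArrayList<>(cheesesInStock);
}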
I ran the FindBugs tool on my project and it found 18 problems of this type:
Storing reference to mutable object -> May expose internal representation by incorporating reference to mutable object
So I have a class whose constructor accepts an array of type Object and assigns it to a private member variable. Here is an example:
public class HtmlCellsProcessing extends HtmlTableProcessing
{
    private Object[] htmlCells;

    public HtmlCellsProcessing(Object[] htmlCells)
    {
        this.htmlCells = htmlCells;
    }
}
Here is a further explanation of the warning:
This code stores a reference to an externally mutable object into the internal representation of the object. If instances are accessed by untrusted code, and unchecked changes to the mutable object would compromise security or other important properties, you will need to do something different. Storing a copy of the object is better approach in many situations.
The advice they give is pretty obvious, but what happens if the array's size is very big? If I copy its values into the member variable array, the application is going to take twice as much memory.
What should I do in such a scenario, where I have a large amount of data? Should I pass it as a reference or always copy it?
It depends. You have multiple concerns, including space, time and correctness.
A defensive copy helps you guarantee that the list items will not change without the knowledge of the class holding the array. But it will take O(n) time and space.
For a very large array, you may find that the costs of a defensive copy in space and time are harmful to your application. If you control all the code with access to the array, it may be reasonable to guarantee correctness without a defensive copy, and suppress the FindBugs warning on that class.
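If you do decide that sharing the reference is acceptable, here is one way the suppression could look (a sketch assuming the FindBugs/SpotBugs annotations artifact is on the classpath; EI_EXPOSE_REP2 is the pattern id behind this warning):

import edu.umd.cs.findbugs.annotations.SuppressFBWarnings;

public class HtmlCellsProcessing extends HtmlTableProcessing {
    private final Object[] htmlCells;

    @SuppressFBWarnings(
            value = "EI_EXPOSE_REP2",
            justification = "Array is huge; all code with access is under our control")
    public HtmlCellsProcessing(Object[] htmlCells) {
        this.htmlCells = htmlCells; // intentional shared reference, no defensive copy
    }
}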
I'd suggest you try using the immutable list from the Guava library. See http://code.google.com/p/guava-libraries/wiki/ImmutableCollectionsExplained
If both encapsulation and performance are required, the typical solution is to pass a reference to an immutable object instead.
Therefore, rather than pass a huge array directly, encapsulate it in an object that does not permit the array's modification:
final class ArraySnapshot {
    private final Object[] array;

    ArraySnapshot(Object[] array) {
        this.array = Arrays.copyOf(array, array.length);
    }

    // methods to read from the array
}
This object can now be passed around cheaply, and since it is immutable, encapsulation is ensured.
This idea, of course, is nothing new: it's what String does for char[].
The advice they give me is pretty obvious, but what happens if the array's size is very big and I copy its values into the member variable array? The application is going to take twice as much memory.
In Java you copy references, not the objects themselves, unless you do a deep copy. A shallow copy of the array duplicates only the references, not the underlying objects.
So if your only concern is getting rid of the warning (which is a valid one, especially if you don't control what is actually stored and multiple threads can modify the objects), you can make the copy without much concern about memory.
The root of the problem for me is that Java does not allow references.
The problem can be summarized succinctly. Imagine you have a List of Blob objects:
class Blob {
    public int xpos;
    public int ypos;
    public int mass;
    public boolean dead;
    private List<Object> giganticData;

    public void blobMerge(Blob aBlob) {
        . . .
        if (. . .) {
            this.dead = true;
        } else {
            aBlob.dead = true;
        }
    }
}
If two blobs are close enough, they should be merged, meaning one of the two blobs being compared should take on the attributes of the other (in this case, adding the mass and merging the giganticData sets) and the other should be marked for deletion from the list.
Setting aside the problem of how to optimally identify adjacent blobs, a Stack Overflow question in its own right, how do you keep the blobMerge() logic in the Blob class? In C or C++ this would be straightforward, as you could just pass one Blob a pointer to the other, and the "host" could do anything it likes to the "guest".
However, blobMerge() as implemented above in Java will operate on a copy of the "guest" Blob, which has two problems: 1) there is no need to incur the heavy cost of copying giganticData, and 2) the original copy of the "guest" Blob will remain unaffected in the containing list.
I can only see two ways to do this:
1) Pass the copies in, doing everything twice. In other words, Blob A hosts Blob B and Blob B hosts Blob A. You end up with the right answer, but have done way more work than necessary.
2) Put the blobMerge() logic in the class that holds the containing List. However, this approach scales very poorly when you start subclassing Blob (BlueBlob, RedBlob, GreenBlob, etc.) such that the merge logic differs for every permutation; you end up with most of the subclass-specific code in the generic container that holds the list.
I've seen something about adding references to Java with a library, but the idea of having to use a library just to get references put me off.
Why would it operate on a copy? Java passes references to objects, and references are very much like C++ pointers.
Um... a reference is passed, not a copy of the entire object. The original object will be modified, and no data is actually moved around.
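A minimal sketch demonstrating this, trimming Blob down to just the dead flag:

class Blob {
    boolean dead;

    void blobMerge(Blob aBlob) {
        aBlob.dead = true; // mutates the caller's object, not a copy
    }
}

public class Demo {
    public static void main(String[] args) {
        Blob host = new Blob();
        Blob guest = new Blob();
        host.blobMerge(guest);          // only the reference value is copied in
        System.out.println(guest.dead); // true: the list's Blob was modified in place
    }
}

The giganticData field would likewise never be copied: both names refer to the same object on the heap.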