I am starting to port my games over to Android from iOS and I've run into a problem.
In my standard workflow on iOS I would store my vertex info in an array of structs:
typedef struct{
float x, y, z;
} Vector3;
Vector3 verts[];
That sort of thing.
Then when it came time to send my vertex data to GL, I would just point to the verts array and it would treat it like an array of floats.
glVertexAttribPointer(Vertex_POSITION, 3, GL_FLOAT, 0, 0, (void *)verts);
How do I do this in Java?
I tried making a Vector3 class and putting a few of them into an array, but it throws an error when I try to stuff that array into GL.
With the direction you are going, this won't work directly. When you have an array of objects in Java (like an array of your Vector3 class), the array contains a sequence of references to those objects, and each object is allocated in a separate chunk of memory.
What you need to pass to OpenGL entry points like glVertexAttribPointer() or glBufferData() is a contiguous block of memory containing the data. An array of objects does not have that layout, so it's simply not possible to use it directly.
You have various options:
Don't use arrays of objects to store your data. Instead, use a data structure that stores the data in a contiguous block of memory. For float values, this can be a float[] or a FloatBuffer. The Java entry points on Android take buffers as arguments, so using buffers is the most straightforward approach (a minimal sketch follows after these options), and you should be able to find plenty of examples in the standard Android OpenGL samples.
Copy the data to temporary buffers before making the API calls. This is obviously not very appealing, since extra data copies cause unproductive overhead.
Write your OpenGL code in C++, and compile it with the NDK. This might also save you a lot of work if you're porting C++ code from another platform. If you intermixed the OpenGL code with Objective-C on iOS, it will still take work.
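To make the first option concrete, here is a minimal sketch against Android's GLES20 bindings. The class and method names are made up for illustration, and positionLocation is assumed to come from glGetAttribLocation() elsewhere:

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;
import android.opengl.GLES20;

public final class VertexUpload {
    // Wraps a plain float[] in a direct, native-order FloatBuffer that GL can read.
    public static FloatBuffer toFloatBuffer(float[] data) {
        FloatBuffer fb = ByteBuffer
                .allocateDirect(data.length * 4)   // 4 bytes per float
                .order(ByteOrder.nativeOrder())
                .asFloatBuffer();
        fb.put(data).position(0);
        return fb;
    }

    // Points a vertex attribute at three floats (x, y, z) per vertex.
    public static void bindPositions(int positionLocation, float[] verts) {
        FloatBuffer vertexBuffer = toFloatBuffer(verts);
        GLES20.glEnableVertexAttribArray(positionLocation);
        GLES20.glVertexAttribPointer(positionLocation, 3, GLES20.GL_FLOAT, false, 0, vertexBuffer);
    }
}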
Related
I was recently watching a tutorial on writing an OpenGL game in Java with LWJGL. In it, the narrator first converts the vertex data from a float[] to a FloatBuffer before giving it to GL15.glBufferData(). Initially I didn't think much of this, but I accidentally gave GL15.glBufferData() the float array directly instead of converting it to a FloatBuffer, and the code worked anyway. I looked up the documentation and it only says "Array version of: BufferData" as a comment, which still leaves me uncertain.
This leads me to wonder: is there any point in going through the effort of converting a float[] to a FloatBuffer when calling GL15.glBufferData() in LWJGL 3.0 with Java 15, assuming the FloatBuffer was filled via FloatBuffer::put() and thus contains the same data?
The difference between a direct NIO Buffer and a float array is that when you hand a float array to LWJGL, it first has to pin the array via a JNI call into the Java runtime, which gives LWJGL's native code a virtual memory address it can hand over to the actual native function being called. Afterwards, the float array has to be unpinned again so the garbage collector is free to manage it.
With a direct NIO Buffer, none of this needs to happen, because direct NIO Buffers are already backed by native virtual memory. So using direct NIO Buffers is considerably faster. Of course, most of that advantage is lost if you first create a float array and then copy it into the NIO Buffer.
You are expected to simply not use a float array to begin with, but only ever populate your direct NIO Buffer.
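A minimal sketch of that approach with LWJGL 3, assuming a current GL context and a buffer object handle vbo created elsewhere; BufferUtils.createFloatBuffer() allocates a direct, native-order buffer:

import java.nio.FloatBuffer;
import org.lwjgl.BufferUtils;
import org.lwjgl.opengl.GL15;

public final class TriangleUpload {
    // Uploads one triangle into the given VBO without ever creating a float[].
    public static void upload(int vbo) {
        FloatBuffer vertexData = BufferUtils.createFloatBuffer(9);
        vertexData.put(0.0f).put(1.0f).put(0.0f)
                  .put(-1.0f).put(-1.0f).put(0.0f)
                  .put(1.0f).put(-1.0f).put(0.0f);
        vertexData.flip();  // rewind so native code reads from the start

        GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, vbo);
        GL15.glBufferData(GL15.GL_ARRAY_BUFFER, vertexData, GL15.GL_STATIC_DRAW);
    }
}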
I would recommend using a float[] instead of a FloatBuffer, because a float[] seems to work just as well and it's exactly what you expect: a plain array of floats. Also, behind the scenes a (non-direct) FloatBuffer is really just a wrapper around a float[], so it doesn't matter too much at the end of the day. To continue my point, converting a float[] to a FloatBuffer seems unnecessary and overly complicated.
I am learning JOGL on my own and have just switched from GL2 to GL3. I found that there are very few tutorials on GL3, and that GL3 is completely different from GL2. As far as I know, many people use a buffer to hold all the vertices and bind it to OpenGL. But when they initialise the buffers, they use arrays, which are fixed in length. How am I going to handle a varying number of vertices or objects if the number of vertices is fixed from the beginning? Are there any simple examples? In general, how can I make my program more "dynamic" (i.e. render a user-defined 3D world)?
The best I can think of is creating a large buffer at the initialisation stage and modifying the data with glBufferSubData(). The other way is to recreate the buffer with glBufferData(), though this is not preferable because of how expensive it is to recreate the buffer every time an entity/object is added to or removed from the world (probably fine once in a while).
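A rough sketch of the first approach, assuming JOGL 2 (GL in com.jogamp.opengl and the Buffers helpers in com.jogamp.common.nio); the capacity and class names are made up for illustration:

import java.nio.FloatBuffer;
import com.jogamp.common.nio.Buffers;
import com.jogamp.opengl.GL;

public final class DynamicVbo {
    private static final int MAX_FLOATS = 1 << 16;  // generous capacity chosen up front
    private final int vbo;

    public DynamicVbo(GL gl, int vbo) {
        this.vbo = vbo;
        gl.glBindBuffer(GL.GL_ARRAY_BUFFER, vbo);
        // Reserve the full capacity once, without uploading any data yet.
        gl.glBufferData(GL.GL_ARRAY_BUFFER, MAX_FLOATS * Buffers.SIZEOF_FLOAT, null, GL.GL_DYNAMIC_DRAW);
    }

    // Overwrites part of the buffer when vertices change, without reallocating it.
    public void update(GL gl, int firstFloat, float[] newData) {
        FloatBuffer data = Buffers.newDirectFloatBuffer(newData);
        gl.glBindBuffer(GL.GL_ARRAY_BUFFER, vbo);
        gl.glBufferSubData(GL.GL_ARRAY_BUFFER,
                (long) firstFloat * Buffers.SIZEOF_FLOAT,
                (long) newData.length * Buffers.SIZEOF_FLOAT,
                data);
    }
}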
I think I am going to make an attempt at removing display lists from my implementation and putting in Vertex Arrays. I know Vertex Buffer Objects are more efficient, but Vertex Arrays have been around since OpenGL 1.1 and as such should work in almost every environment. How is compatibility for Vertex Buffer Objects?
Vertex Buffer Objects are essentially Vertex Arrays, except that instead of pointing to an address in your program's process address space, OpenGL gives you a handle to OpenGL-managed memory, and the Vertex Array pointers become offsets into the memory behind that handle.
It is actually very easy to add VBO support to programs that already make use of Vertex Arrays, and just as easy to conditionally use VBOs when they are available and fall back to client-address-space Vertex Arrays when they are not.
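The "pointer becomes an offset" idea looks like this in Java; it is illustrated here with Android's GLES20 bindings rather than the legacy desktop API, and the class and parameter names are made up:

import java.nio.FloatBuffer;
import android.opengl.GLES20;

public final class PointerVsOffset {
    // Client-side vertex array: the data lives in your process, so you pass the buffer itself.
    public static void clientSide(int positionLocation, FloatBuffer vertexBuffer) {
        GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, 0);
        GLES20.glVertexAttribPointer(positionLocation, 3, GLES20.GL_FLOAT, false, 0, vertexBuffer);
    }

    // VBO: the data lives in GL-managed memory, so the last argument is a byte offset
    // into the currently bound buffer object instead of a pointer.
    public static void fromVbo(int positionLocation, int vbo) {
        GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, vbo);
        GLES20.glVertexAttribPointer(positionLocation, 3, GLES20.GL_FLOAT, false, 0, 0);
    }
}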
I have successfully created an object loader in Java that loads in vertices, indices, texture coordinates and normals. The object loader reads from Wavefront OBJ files.
It is relatively simple, however as soon as I try to load a more complex file with texture-coordinate indices and normal indices, I have no idea what to do with these extra sets of indices. I could not find any OpenGL (or in this case OpenGL ES 1.1) methods that accept the texture and normal indices. This has not only been bugging me in OpenGL for Android but also previously did in WebGL, so any help would be much appreciated.
It is rather annoying that there are so many tutorials that talk about how to load vertices, indices, texture coords and normals, but I am yet to see one (for OpenGL ES) where they load in texture and normal indices.
Do I have to reorder or rebuild the texture coord / normal arrays based on the indices? Or is there some function I'm missing?
but I am yet to see one (for OpenGL ES) where they load in texture and normal indices.
There's a reason for that: you can't. This is generally why the Wavefront OBJ format is bad for loading into OpenGL/D3D applications.
Each vertex (each combination of position/normal/texCoord/etc. data) must be unique. If you are doing indexed rendering, each index refers to one specific combination of position/normal/texCoord/etc.
In short, you can use only one index to render with. That index indexes into all of the attribute arrays simultaneously. So if your data indexes different attributes with different index lists, you must convert your data to do things correctly. The best way to do this is via some kind of off-line tool.
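A rough sketch of what that conversion can look like in Java, done at load time here just to show the idea (an offline tool would do the same thing). The class name is made up, and the OBJ parsing itself is assumed to have already produced the raw positions, texCoords, normals and per-corner index triples:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Rebuilds a single index list from OBJ-style faces, where each face corner
// references separate position/texcoord/normal indices. Every unique
// (position, texcoord, normal) combination becomes one output vertex.
public final class ObjDeindexer {
    public final List<Float> vertices = new ArrayList<>();   // interleaved x,y,z, u,v, nx,ny,nz
    public final List<Integer> indices = new ArrayList<>();
    private final Map<String, Integer> seen = new HashMap<>();

    // posIdx/texIdx/nrmIdx are 0-based indices into the raw OBJ arrays.
    public void addCorner(float[] positions, float[] texCoords, float[] normals,
                          int posIdx, int texIdx, int nrmIdx) {
        String key = posIdx + "/" + texIdx + "/" + nrmIdx;
        Integer index = seen.get(key);
        if (index == null) {
            // New combination: emit a new vertex and remember its index.
            index = seen.size();
            seen.put(key, index);
            for (int i = 0; i < 3; i++) vertices.add(positions[posIdx * 3 + i]);
            for (int i = 0; i < 2; i++) vertices.add(texCoords[texIdx * 2 + i]);
            for (int i = 0; i < 3; i++) vertices.add(normals[nrmIdx * 3 + i]);
        }
        indices.add(index);
    }
}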
This is a code style & design question, perhaps dealing with tradeoffs. It is probably obvious.
Backstory:
When migrating from C++ to Java I encountered a difference I am now not sure how to deal with.
In all OpenGL calls you pass an array with an offset and an additional param which tells the function how the passed array is structured. Take glDrawArrays as an example.
So when drawing, it would be best to have all my vertices in one array, a FloatBuffer.
However, I also need those vertices for my physics calculations.
The question:
Should I create a separate buffer for physics and copy its results into the FloatBuffer every update, dealing with Vec3f and Point3f classes, since they cannot be passed to OpenGL functions because they might be fragmented in memory (or can they?)?
Or should I have a separate class for dealing with my structures, which takes an offset along with the array:
public static void addVec3(float[] vec3in_a, int offset_a, float[] vec3in_b, int offset_b, float[] vec3out, int offset_out)
And what should the offsets represent? Should they account for the vec3 size and step accordingly (offset_a *= 3), the way an array of Vec3 would behave, or should they just be plain float-array offsets?
Thank you:)
Can't you do the physics calculations on the GPU? JOCL or shaders would be a possible route. Usually you try to avoid doing all the vertex transformations on the CPU (in Java, C, whatever) and sending them to the GPU every frame.
If you really have to do it in Java (on the CPU), you could adapt your maths classes (Vec, Point, etc.) to store their data in a FloatBuffer (a sketch of that idea follows below). But this will certainly be slower than primitive floats, since read/write operations on a FloatBuffer are not free.
Without knowing what you are actually doing, a copy from FloatBuffer -> math object and back could even be feasible. If it is not fast enough... optimize later :)
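A minimal sketch of such a FloatBuffer-backed vector class; the names are made up, and it simply views three consecutive floats at a given vertex index in a shared buffer:

import java.nio.FloatBuffer;

// A view over three consecutive floats in a shared FloatBuffer.
// Physics code mutates the vector in place; the same buffer is handed to OpenGL
// as-is, so no copy is needed before drawing.
public final class BufferVec3 {
    private final FloatBuffer buffer;
    private final int base;  // absolute float index of the x component

    public BufferVec3(FloatBuffer buffer, int vertexIndex) {
        this.buffer = buffer;
        this.base = vertexIndex * 3;  // three floats per vertex
    }

    public float x() { return buffer.get(base); }
    public float y() { return buffer.get(base + 1); }
    public float z() { return buffer.get(base + 2); }

    // In-place add, writing the result straight back into the shared buffer.
    public void add(float dx, float dy, float dz) {
        buffer.put(base, x() + dx);
        buffer.put(base + 1, y() + dy);
        buffer.put(base + 2, z() + dz);
    }
}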