This is a code style & design question, perhaps dealing with tradeoffs. It is probably obvious.
Backstory:
When migrating from C++ to Java I encountered a difference I am now not sure how to deal with.
In all OpenGL calls you pass an array with an offset and an additional parameter which tells the function how the passed array is structured. Take glDrawArrays as an example.
So when drawing, it would be best to have all my vertices in one array, a FloatBuffer.
However, I also need those vertices for my physics calculations.
The question:
Should I create a separate buffer for physics and copy its results into the FloatBuffer every update, working with Vec3f and Point3f classes, since those cannot be passed to OpenGL functions because their data may be fragmented in memory (or can they?)?
Or should I have a separate class for dealing with my structures, one that takes an offset along with the array?
public static void addVec3(float[] vec3in_a, int offset_a, float[] vec3in_b, int offset_b, float[] vec3out, int offset_out)
And what should the offsets represent? Should they account for the vec3 size and move appropriately (offset_a *= 3), like an array of Vec3 objects would behave, or should they just index into a plain float array?
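To make the two conventions concrete, here is a minimal sketch of what I mean (the addVec3At wrapper is only an illustration of the scaled-offset variant, not something I have written yet):

// A sketch of the helper, assuming offsets are plain float indices into the backing array.
public final class VecMath {
    private VecMath() {}

    public static void addVec3(float[] vec3in_a, int offset_a,
                               float[] vec3in_b, int offset_b,
                               float[] vec3out, int offset_out) {
        vec3out[offset_out]     = vec3in_a[offset_a]     + vec3in_b[offset_b];
        vec3out[offset_out + 1] = vec3in_a[offset_a + 1] + vec3in_b[offset_b + 1];
        vec3out[offset_out + 2] = vec3in_a[offset_a + 2] + vec3in_b[offset_b + 2];
    }

    // The alternative convention: offsets count whole Vec3s, so a thin wrapper
    // scales the index by 3 before delegating.
    public static void addVec3At(float[] a, int indexA, float[] b, int indexB,
                                 float[] out, int indexOut) {
        addVec3(a, indexA * 3, b, indexB * 3, out, indexOut * 3);
    }
}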
Thank you:)
Can't you do the physics calculations on the GPU? JOCL or shaders would be a possible route. Usually you try to avoid doing all the vertex transformations on the CPU (in Java, C, whatever) and sending them to the GPU every frame.
If you really have to do that in Java (on the CPU), you could adapt your maths classes (Vec, Point, etc.) to store their data in a FloatBuffer, as sketched below. But this will certainly be slower than primitive floats, since read/write operations on a FloatBuffer are not without overhead.
Without knowing what you are actually doing, a copy from FloatBuffer -> math object and back could even be feasible. If it is not fast enough... optimize later :)
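A minimal sketch of such a buffer-backed vector class, assuming one shared FloatBuffer holds the vertex data (Vec3View is a made-up name, not an existing library class):

import java.nio.FloatBuffer;

// A Vec3 "view": reads and writes three floats in a shared FloatBuffer instead
// of owning its own x/y/z fields, so the same buffer can be handed to OpenGL.
public class Vec3View {
    private final FloatBuffer buffer;
    private final int base; // absolute float index of this vector's x component

    public Vec3View(FloatBuffer buffer, int vecIndex) {
        this.buffer = buffer;
        this.base = vecIndex * 3;
    }

    public float x() { return buffer.get(base); }
    public float y() { return buffer.get(base + 1); }
    public float z() { return buffer.get(base + 2); }

    // Example operation: add another vector into this one, in place.
    public void add(Vec3View other) {
        buffer.put(base,     x() + other.x());
        buffer.put(base + 1, y() + other.y());
        buffer.put(base + 2, z() + other.z());
    }
}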
Related
I was recently watching a tutorial on writing an OpenGL game in Java with LWJGL. In it, the narrator/author first converts the float array from float[] to a FloatBuffer before giving it to GL15.glBufferData(). Initially I didn't think much of this, but I accidentally gave GL15.glBufferData() the float array directly instead of converting it to a FloatBuffer, and the code ended up working anyway. I looked up the documentation and it only says "Array version of: BufferData" as a comment, which still leaves me uncertain.
This leads me to wonder: is there any point in going through the effort of converting a float[] to a FloatBuffer when calling GL15.glBufferData() in LWJGL 3.0 with Java 15, assuming the FloatBuffer was given its value using FloatBuffer::put() and thus contains the same data?
The difference between a direct NIO Buffer and a float array is that when you hand a float array to LWJGL, it will first need to pin the array using a JNI call into the Java runtime, which will give LWJGL's native code a virtual memory address it can hand over to the actual called native function. Then, the float array needs to be unpinned, making it eligible for garbage collection again.
With a direct NIO Buffer, this does not need to happen, because direct NIO Buffers are already backed by a native virtual memory address. So using direct NIO Buffers is considerably faster (performance-wise). Of course, all this performance is somewhat lost when you first create a float array and then copy it into the NIO Buffer.
You are expected to simply not use a float array to begin with, but only ever populate your direct NIO Buffer.
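A minimal sketch of that pattern with LWJGL 3 (the triangle data and the class name are made up for illustration):

import static org.lwjgl.opengl.GL15.*;
import java.nio.FloatBuffer;
import org.lwjgl.BufferUtils;

public class VboUpload {
    // Write the vertex data straight into a direct buffer; no float[] stage at all.
    public static int uploadTriangle() {
        FloatBuffer vertices = BufferUtils.createFloatBuffer(9); // 3 vertices * xyz
        vertices.put( 0.0f).put( 1.0f).put(0.0f)
                .put(-1.0f).put(-1.0f).put(0.0f)
                .put( 1.0f).put(-1.0f).put(0.0f);
        vertices.flip(); // make the written range readable for the upload

        int vbo = glGenBuffers();
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, vertices, GL_STATIC_DRAW);
        return vbo;
    }
}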
I would recommend using a float[] instead of a FloatBuffer because a float[] seems to work just as well, and it's exactly what you expect: a plain array of floats. Also, behind the scenes, a non-direct FloatBuffer is really just a wrapper around a float[], so it doesn't matter TOO much at the end of the day. To continue my point, converting a float[] to a FloatBuffer seems unnecessary and overly complicated.
I have a particle system, and for that I render (for one particle effect, for example) 100 textured quads. If I add several particle effects it lags, because each particle has its own velocity (2f vector), position (3f vector), etc. (The vector classes are from LWJGL.)
As a consequence, each instance holds 5 or 6 objects. Now my question is:
Is there a way of making this better, so that not every instance has 5 new vectors? (And I know that there are many other, better ways of creating a particle system, but the one I chose was easy and lets me practice "performance boosting".)
Ok, so, I will refer to the code you probably got inspired by.
I also assume you have at least a GL 3.3 profile.
In theory, to optimize best, you should move the Map<ParticleTexture, List<Particle>> particles (using a texture atlas) onto the GPU and upload only the data that changes per frame, like the camera.
But this is not easy to do in one step, so I suggest modifying your current algorithm step by step, moving one thing at a time onto the GPU.
Some observations:
in prepare() and finishRendering(), enabling the i-th vertex attribute array is part of the VAO state; binding/unbinding the VAO is enough, so the glEnableVertexAttribArray and glDisableVertexAttribArray calls can be removed
use a uniform buffer instead of setting all those uniforms individually
loader.updateVbo() is quite expensive: it creates a FloatBuffer every frame and clears the buffer before copying the data.
You should allocate a float[] or a FloatBuffer just once, reuse it, and simply call glBufferSubData instead of glBufferData, along the lines of the sketch after this list.
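A minimal sketch of that last point with LWJGL (the class name and the staging-buffer capacity are assumptions, not taken from your code):

import static org.lwjgl.opengl.GL15.*;
import java.nio.FloatBuffer;
import org.lwjgl.BufferUtils;

// Hypothetical helper: allocate the staging buffer once, reuse it every frame.
public class ParticleVboUpdater {
    private final FloatBuffer staging;

    public ParticleVboUpdater(int maxFloats) {
        staging = BufferUtils.createFloatBuffer(maxFloats); // direct buffer, allocated once
    }

    public void update(int vbo, float[] cpuData, int usedFloats) {
        staging.clear();                          // reset position/limit, no reallocation
        staging.put(cpuData, 0, usedFloats);
        staging.flip();

        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        // Overwrite the front of the existing storage instead of re-creating it with glBufferData.
        glBufferSubData(GL_ARRAY_BUFFER, 0, staging);
        glBindBuffer(GL_ARRAY_BUFFER, 0);
    }
}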
I am learning JOGL on my own. I have just switched from GL2 to GL3. I found that there are very few tutorials on GL3. Also, I found that GL3 is completely different from GL2. As far as I know, many people use a buffer to hold all the vertices and bind it to OpenGL. But when they initialise the buffers, they use arrays, which are fixed in length. How am I going to handle a varying number of vertices or objects if the number of vertices is fixed from the beginning? Are there any simple examples? In general, how can I make my program more "dynamic"? (i.e. render a user-defined 3D world)
The best I can think of is creating a large buffer at initialisation and modifying the data with glBufferSubData(). The other way is to recreate the buffer with glBufferData(), though this one is not preferable because of how expensive it is to recreate the buffer every time a new entity/object is added to or removed from the world (probably fine once in a while).
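A minimal sketch of the first approach in JOGL, assuming a GL3 context (the class name and capacity choice are made up):

import com.jogamp.opengl.GL;
import com.jogamp.opengl.GL3;
import java.nio.FloatBuffer;

public class DynamicVbo {
    private final int vbo;

    // Reserve room for the worst case once; capacityFloats is whatever upper bound you pick.
    public DynamicVbo(GL3 gl, int capacityFloats) {
        int[] ids = new int[1];
        gl.glGenBuffers(1, ids, 0);
        vbo = ids[0];
        gl.glBindBuffer(GL.GL_ARRAY_BUFFER, vbo);
        // Allocate the storage with no initial data; contents will be streamed in later.
        gl.glBufferData(GL.GL_ARRAY_BUFFER, (long) capacityFloats * Float.BYTES,
                        null, GL.GL_DYNAMIC_DRAW);
    }

    // Call whenever the vertex data changes; only the used range is uploaded.
    public void upload(GL3 gl, FloatBuffer vertices) {
        gl.glBindBuffer(GL.GL_ARRAY_BUFFER, vbo);
        gl.glBufferSubData(GL.GL_ARRAY_BUFFER, 0,
                           (long) vertices.remaining() * Float.BYTES, vertices);
    }
}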
I am starting to port my games over to Android from iOS and I've run into a problem.
In my standard workflow on iOS I would store my vertex info in an array of structs:
typedef struct {
    float x, y, z;
} Vector3;

Vector3 verts[];
That sort of thing.
Then when it came time to send my vertex data to GL, I would just point to the verts array and it would treat it like an array of floats.
glVertexAttribPointer(Vertex_POSITION, 3, GL_FLOAT, 0, 0, (void *)verts);
How do I do this in Java?
I tried making a Vector3 class and putting a few of them into an array, but it throws an error when I try to stuff that array into GL.
With the direction you are going, I don't think this can work directly. When you have an array of objects in Java (like an array of your Vector3 class), the array contains a sequence of references to those objects, and each object is allocated in a separate chunk of memory.
What you need to pass to OpenGL entry points like glVertexAttribPointer() or glBufferData() is a contiguous block of memory containing the data. An array of objects does not have that layout, so it's simply not possible to use it directly.
You have various options:
Don't use arrays of objects to store your data. Instead, use a data structure that stores the data in a contiguous block of memory. For float values, this can be a float[] or a FloatBuffer. The Java entry points on Android take buffers as arguments, so using buffers is the most straightforward approach. You should be able to find plenty of this in Android's OpenGL sample code; there is also a small sketch after this list.
Copy the data to temporary buffers before making the API calls. This is obviously not very appealing, since extra data copies cause unproductive overhead.
Write your OpenGL code in C++, and compile it with the NDK. This might also save you a lot of work if you're porting C++ code from another platform. If you intermixed the OpenGL code with Objective-C on iOS, it will still take work.
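As referenced in the first option above, a minimal sketch of the direct-buffer approach on Android (the vertex data, class name, and attribute location are assumptions for illustration):

import android.opengl.GLES20;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

public class TriangleSetup {
    // Three vertices, xyz each; plain float data on the Java side.
    private static final float[] VERTS = {
         0.0f,  1.0f, 0.0f,
        -1.0f, -1.0f, 0.0f,
         1.0f, -1.0f, 0.0f,
    };

    // positionLoc is assumed to come from glGetAttribLocation on your shader program.
    public static void setupPositionAttribute(int positionLoc) {
        // Direct buffer in native byte order: one contiguous block GL can read.
        FloatBuffer vertBuffer = ByteBuffer.allocateDirect(VERTS.length * Float.BYTES)
                .order(ByteOrder.nativeOrder())
                .asFloatBuffer();
        vertBuffer.put(VERTS);
        vertBuffer.position(0);

        GLES20.glEnableVertexAttribArray(positionLoc);
        GLES20.glVertexAttribPointer(positionLoc, 3, GLES20.GL_FLOAT, false, 0, vertBuffer);
    }
}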
I've recently begun learning OpenGL, starting with immediate mode, glPush/PopMatrix, and the glTranslate/Rotate/Scale functions. I've switched over to vertex buffer objects for storing geometry, but I'm still using the push/pop matrix and transform functions. Is there a newer, more efficient way of performing these operations?
I've heard of glMultMatrix, but some sources have said this is less efficient.
If at all relevant, I am using LWJGL with Java for rendering.
edit: Does anyone know about the performance impact of calling glViewport and gluPerspective, as well as other standard initialization functions? I have been told that it is often good practice to call these init functions along with the rendering code every update.
For modern OpenGL you want to write a vertex shader and multiply each vertex by the appropriate transform matrix in there. You'll want to pass in the matrices you need (probably model, view, and projection). You can calculate those matrices on the CPU on each render pass as needed. This means you won't need gluPerspective. You probably only need to call glViewport once unless you're trying to divide up the window and draw different things in each section. But I wouldn't expect it to cause any performance issues. You can always profile to see for sure.
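A minimal sketch of that setup, assuming LWJGL 3 and a GL 3.3 context (the shader source and helper class are illustrations, not a complete renderer):

import static org.lwjgl.opengl.GL20.*;
import java.nio.FloatBuffer;
import org.lwjgl.BufferUtils;

public class TransformUniforms {
    // The fixed-function matrix stack is replaced by explicit model/view/projection
    // uniforms that the vertex shader multiplies each vertex by.
    public static final String VERTEX_SHADER =
        "#version 330 core\n" +
        "layout(location = 0) in vec3 position;\n" +
        "uniform mat4 model;\n" +
        "uniform mat4 view;\n" +
        "uniform mat4 projection;\n" +
        "void main() {\n" +
        "    gl_Position = projection * view * model * vec4(position, 1.0);\n" +
        "}\n";

    private final FloatBuffer mat = BufferUtils.createFloatBuffer(16);

    // Upload one 4x4 matrix (16 floats, column-major) to the named uniform,
    // recalculated on the CPU whenever it changes.
    public void setMatrix(int program, String name, float[] columnMajor) {
        mat.clear();
        mat.put(columnMajor).flip();
        int location = glGetUniformLocation(program, name);
        glUniformMatrix4fv(location, false, mat);
    }
}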