I am learning JOGL on my own. I have just switched from GL2 to GL3, and I found that there are very few tutorials on GL3. I also found that GL3 is completely different from GL2. As far as I know, many people use a buffer to hold all the vertices and bind it to OpenGL. But when they initialise the buffers, they use arrays, which are fixed in length. How am I going to work with a varying number of vertices or objects if the number of vertices is fixed from the beginning? Are there any simple examples? In general, how can I make my program more "dynamic"? (i.e. render a user-defined 3D world)
The best I can think of is creating a large buffer at the initialization stage and modifying the data with glBufferSubData(). The other way is to recreate the buffer with glBufferData(), though this is not preferable because of how expensive it is to recreate the buffer every time a new entity/object is added to or removed from the world (probably fine once in a while).
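For illustration, here is a minimal sketch of that first approach, assuming a JOGL GL3 context (the gl object), a worst-case capacity maxFloats, a float[] objectVertices with the new data, and a per-object byte offset objectOffsetBytes that you track yourself:

    import com.jogamp.common.nio.Buffers;
    import com.jogamp.opengl.GL;
    import java.nio.FloatBuffer;

    // init: reserve the worst-case size once, with no data yet
    int[] vbo = new int[1];
    gl.glGenBuffers(1, vbo, 0);
    gl.glBindBuffer(GL.GL_ARRAY_BUFFER, vbo[0]);
    gl.glBufferData(GL.GL_ARRAY_BUFFER, maxFloats * Float.BYTES, null, GL.GL_DYNAMIC_DRAW);

    // whenever an object changes: overwrite only its slice of the buffer
    FloatBuffer data = Buffers.newDirectFloatBuffer(objectVertices);
    gl.glBindBuffer(GL.GL_ARRAY_BUFFER, vbo[0]);
    gl.glBufferSubData(GL.GL_ARRAY_BUFFER, objectOffsetBytes, data.capacity() * Float.BYTES, data);

If an object is removed, you can simply stop drawing its range (and compact or reuse the space occasionally) rather than reallocating the whole buffer.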
Related
I have a particle system and for that I render (for one particle effect, for example) 100 quads with a texture on them. If I add several particle effects it lags, because each particle has its own velocity (2f vector), position (3f vector), etc. (The vectors are from LWJGL.)
As a consequence, each instance carries 5 or 6 of these data objects. Now my question is:
Is there a way of making this better, so that not every instance has 5 new vectors? (And I know there are many other, better ways of creating a particle system, but the one I chose was easy and lets me practice "performance boosting".)
OK, so I will refer to this code, which you were probably inspired by.
I also assume you have at least a GL 3.3 profile.
In theory, to optimize as much as possible, you should move Map<ParticleTexture, List<Particle>> particles (using a texture atlas) onto the GPU and upload only the data that changes per frame, such as the camera.
But this is not easy to do in one step, so I suggest modifying your current algorithm step by step, moving one thing at a time onto the GPU.
Some observations:
in prepare() and finishRendering(), enabling the i-th vertex attribute array is part of the VAO state; binding/unbinding the VAO is enough, so the glEnableVertexAttribArray and glDisableVertexAttribArray calls can be removed
use a uniform buffer; don't keep all those single uniforms on their own.
loader.updateVbo() is quite expensive: it creates a FloatBuffer every render and clears the buffer before copying the data.
You should allocate a float[] or a FloatBuffer just once, reuse it, and simply call glBufferSubData, avoiding glBufferData (see the sketch below).
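For instance, a minimal sketch of that last point, assuming LWJGL-style static GL15 bindings and hypothetical MAX_INSTANCES / INSTANCE_DATA_LENGTH constants for the worst-case size (the VBO itself is assumed to have been sized once with glBufferData at init time; this is not your exact Loader method, just the shape of it):

    import java.nio.FloatBuffer;
    import org.lwjgl.BufferUtils;
    import static org.lwjgl.opengl.GL15.*;

    // allocated once, e.g. in the constructor, and reused every frame
    private final FloatBuffer buffer =
            BufferUtils.createFloatBuffer(MAX_INSTANCES * INSTANCE_DATA_LENGTH);

    public void updateVbo(int vbo, float[] data) {
        buffer.clear();
        buffer.put(data);
        buffer.flip();
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferSubData(GL_ARRAY_BUFFER, 0, buffer); // overwrite, never reallocate
        glBindBuffer(GL_ARRAY_BUFFER, 0);
    }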
I want to make my VBO move and I was wondering if I should update the entire VBO with updated values using glBufferSubData or just use the deprecated glTranslatef to move my thing.
And if I were to just update the values in the VBO, should I use a separate VBO for the vertices, normals, and texture coordinates or should I put them all in one?
Thanks.
1: You should usually try to reduce the amount of data transferred between the CPU and GPU to a minimum to keep performance at a maximum.
So, updating the entire vertex buffer using glBufferSubData() should be avoided as long as possible, and transformations should be used instead, either glTranslate() and glLoadMatrix() (which are deprecated) or shaders.
But since you are already working with VBOs, I would recommend using shaders to do the transformation, with a shader uniform variable and glUniformMatrix().
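A rough sketch of what that looks like on the Java side, assuming LWJGL 3-style bindings, a modelMatrixLocation obtained from glGetUniformLocation, and a column-major float[16] modelMatrix that you compute yourself each frame:

    import java.nio.FloatBuffer;
    import org.lwjgl.BufferUtils;
    import static org.lwjgl.opengl.GL20.*;

    // allocated once and reused
    FloatBuffer matrixBuffer = BufferUtils.createFloatBuffer(16);

    // per frame: the VBO never changes, only this 16-float uniform does
    matrixBuffer.clear();
    matrixBuffer.put(modelMatrix).flip();
    glUniformMatrix4fv(modelMatrixLocation, false, matrixBuffer);

Moving the object then costs 16 floats per frame instead of re-uploading every vertex.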
2: Separating the vertices, normals, and texture coordinates or combining them in one VBO is up to you. In most cases I combine them because it produces only one handle and I need only one glBufferData() call. But if there is a situation when only one part like the texture coordinates is updated and the rest stays as it is, then separating would be better as you could also update them separately.
Even more advanced:
If you are using buffer interleaving, then you obviously have to combine them.
Sometimes using buffer interleaving can be faster than not using it, because the data needed for a single vertex is kept together and the caches can be used. But you have to try out what impact that has on performance (if any), as it strongly depends on the hardware you are using.
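To make the interleaving point concrete, here is a sketch of an interleaved layout (3 position + 3 normal + 2 texture-coordinate floats per vertex) using generic attributes, assuming LWJGL-style static bindings and an already-filled vbo handle:

    import static org.lwjgl.opengl.GL11.GL_FLOAT;
    import static org.lwjgl.opengl.GL15.*;
    import static org.lwjgl.opengl.GL20.*;

    int stride = (3 + 3 + 2) * Float.BYTES;   // bytes from one vertex to the next

    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glVertexAttribPointer(0, 3, GL_FLOAT, false, stride, 0);                // position
    glVertexAttribPointer(1, 3, GL_FLOAT, false, stride, 3 * Float.BYTES);  // normal
    glVertexAttribPointer(2, 2, GL_FLOAT, false, stride, 6 * Float.BYTES);  // texcoord
    glEnableVertexAttribArray(0);
    glEnableVertexAttribArray(1);
    glEnableVertexAttribArray(2);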
I've recently begun learning OpenGL, starting with immediate mode, glPush/PopMatrix, and the glTranslate/Rotate/Scale functions. I've switched over to vertex buffer objects for storing geometry, but I'm still using the push/pop matrix and transform functions. Is there a newer, more efficient method of performing these operations?
I've heard of glMultMatrix, but some sources have said this is less efficient.
If at all relevant, I am using LWJGL with Java for rendering.
edit: Does anyone know about the performance impact of calling glViewport and gluPerspective, as well as other standard initialization functions? I have been told that it is often good practice to call these init functions along with the rendering code every update.
For modern OpenGL you want to write a vertex shader and multiply each vertex by the appropriate transform matrix in there. You'll want to pass in the matrices you need (probably model, view, and projection). You can calculate those matrices on the CPU on each render pass as needed. This means you won't need gluPerspective. You probably only need to call glViewport once unless you're trying to divide up the window and draw different things in each section. But I wouldn't expect it to cause any performance issues. You can always profile to see for sure.
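As an illustration, a minimal vertex shader of that kind, held as a Java string for compilation at init time; splitting the transform into model/view/projection is an assumption about how you organise your matrices, and they are computed on the CPU and uploaded as uniforms each frame:

    // GLSL 3.30 vertex shader: all movement comes from the uniform matrices,
    // so the vertex data itself never has to be re-uploaded
    String vertexShaderSource =
            "#version 330 core\n" +
            "layout(location = 0) in vec3 position;\n" +
            "uniform mat4 projection;\n" +
            "uniform mat4 view;\n" +
            "uniform mat4 model;\n" +
            "void main() {\n" +
            "    gl_Position = projection * view * model * vec4(position, 1.0);\n" +
            "}\n";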
I am currently working on a project that renders large oil-wells and sub-surface data on to the android tablet using OpenGL ES 2.0.
The data comes in from a restful call made by the client (Tablet) to the server. I need to render two types of data. One being a set of vertices where I just join all the vertices (Well rendering) and the other is the subsurface rendering where each surface has huge triangle-data associated with them.
I was able to reduce the size of the well by approximating the next point and constructing the data that is to be sent to the client. But this cannot be done to the surface data as each and every triangle is important to get the triangles joined and form the surface.
I would appreciate if you guys suggest an approach to either reduce the data from the server or to reduce the time taken to render such a huge data effectively.
The way you can handle such a complex mesh really depends on the scope of your project. Unfortunately, there is not much we can say based on the provided inputs, and the activity itself is not an easy task.
Usually, when the mesh is very complex, a typical approach to make the rendering process fast is to adopt dynamic Level of Detail (in programming terminology, LOD).
The idea is to render "distant" meshes with a very low LOD (and therefore a much lower number of vertices to render) and to replace the mesh with a higher-resolution version every time the camera approaches the mesh's details.
This technique is widely used in computer games, for instance when a terrain needs to be rendered. When the player is in a particular sector of the map, the mesh of that sector is in a high level of detail and the others are in low detail. As soon as the player moves, the different sectors switch to "high resolution" (allow me the term).
It is not easy to do, but it works in many, many situations.
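A very rough sketch of the idea in Java; the Mesh type and the distance thresholds are placeholders for whatever your renderer already uses:

    // Holds the same object at several resolutions and picks one by camera distance.
    class LodMesh {
        Mesh high, medium, low;          // pre-built versions of the same object
        float mediumDistance = 50f;      // tuning parameters, scene-dependent
        float lowDistance    = 200f;

        Mesh select(float distanceToCamera) {
            if (distanceToCamera < mediumDistance) return high;
            if (distanceToCamera < lowDistance)    return medium;
            return low;
        }
    }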
In this Gamasutra article, there is plenty of information on how this technique works:
http://www.gamasutra.com/view/feature/131596/realtime_dynamic_level_of_detail_.php?print=1
The idea, in your case, would be to take the mesh provided by the web service and treat it as the HD version of the mesh. Then (particularly if the mesh is composed of different objects), apply a triangular mesh simplification algorithm to create LD meshes of the same objects. An example of how you could proceed is well described here:
http://herakles.zcu.cz/~skala/PUBL/PUBL_2002/2002_Mesh-Simplification-ICCS2002.pdf
I hope to have helped in some way.
Cheers
Maurizio
I have successfully created an object loader in Java that loads in vertices, indices, texture coordinates and normals. The object loader reads in from Wavefront OBJ files.
It is relatively simple; however, as soon as I try to load a more complex file with texture-coordinate indices and normal indices, I have no idea what to do with these extra sets of indices. I could not find any OpenGL (or in this case OpenGL ES 1.1) methods to pass the texture and normal indices to, either. This has not only been bugging me in OpenGL for Android but also previously did in WebGL, so any help would be much appreciated.
It is rather annoying that there are so many tutorials that talk about how to load vertices, indices, texture coords and normals but I am yet to see one (for opengl es) where they load in texture and normal indices.
Do I have to reorder or rebuild the texture coordinate / normal arrays based on the indices? Or is there some function I'm missing?
but I am yet to see one (for opengl es) where they load in texture and normal indices.
There's a reason for that: you can't. This is generally why the Wavefront OBJ format is bad for loading into OpenGL/D3D applications.
Each vertex, each combination of position/normal/texCoord/etc data, must be unique. If you are doing index rendering, each index refers to a specific combination of position/normal/texCoord/etc.
In short, you can use only one index to render with. That index indexes into all of the attribute arrays simultaneously. So if your data indexes different attributes with different index lists, you must convert your data to do things correctly. The best way to do this is via some kind of off-line tool.
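For reference, a sketch of that conversion (off-line or at load time) in Java; positions, texCoords and normals are the raw float arrays parsed from the OBJ file, faceTriplets holds one 0-based {position, texCoord, normal} index triplet per face corner, and all names are just for illustration:

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Rebuilds single-index data from OBJ-style "v/vt/vn" triplets.
    Map<String, Integer> unique = new HashMap<>();
    List<Float> vertexData = new ArrayList<>();   // interleaved pos(3) + uv(2) + normal(3)
    List<Integer> indices = new ArrayList<>();

    for (int[] t : faceTriplets) {
        String key = t[0] + "/" + t[1] + "/" + t[2];
        Integer index = unique.get(key);
        if (index == null) {
            // first time we see this combination: emit a new unique vertex
            index = unique.size();
            unique.put(key, index);
            for (int i = 0; i < 3; i++) vertexData.add(positions[t[0] * 3 + i]);
            for (int i = 0; i < 2; i++) vertexData.add(texCoords[t[1] * 2 + i]);
            for (int i = 0; i < 3; i++) vertexData.add(normals[t[2] * 3 + i]);
        }
        indices.add(index);
    }

The resulting vertexData and indices arrays can then be uploaded as a single interleaved VBO plus one index buffer, which is what glDrawElements expects.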