Modern method of transforming modelview matrix in OpenGL? - java

I've recently begun learning OpenGL, starting with immediate mode, glPushMatrix/glPopMatrix, and the glTranslate/glRotate/glScale functions. I've switched over to vertex buffer objects for storing geometry, but I'm still using the push/pop matrix and transform functions. Is there a newer, more efficient method of performing these operations?
I've heard of glMultMatrix, but some sources have said this is less efficient.
If at all relevant, I am using LWJGL with Java for rendering.
edit: Does anyone know about the performance impact of calling glViewport and gluPerspective, as well as other standard initialization functions? I have been told that it is often good practice to call these init functions along with the rendering code every update.

For modern OpenGL you want to write a vertex shader and multiply each vertex by the appropriate transform matrix in there. You'll want to pass in the matrices you need (probably model, view, and projection). You can calculate those matrices on the CPU on each render pass as needed. This means you won't need gluPerspective. You probably only need to call glViewport once unless you're trying to divide up the window and draw different things in each section. But I wouldn't expect it to cause any performance issues. You can always profile to see for sure.
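As a minimal sketch of the CPU side of this, here is one way to build a perspective projection matrix yourself (replacing gluPerspective) before uploading it to a shader uniform; the class and method names are illustrative, not from any particular library. In the vertex shader you would then compute something like `gl_Position = projection * view * model * vec4(position, 1.0);`.

```java
// Sketch: compute the projection matrix on the CPU instead of calling
// gluPerspective. The resulting float[16] is column-major, ready for
// LWJGL's glUniformMatrix4fv-style calls. Names here are illustrative.
public class MatrixUtil {
    /** Column-major 4x4 perspective matrix, equivalent to gluPerspective. */
    public static float[] perspective(float fovyDeg, float aspect, float near, float far) {
        float f = (float) (1.0 / Math.tan(Math.toRadians(fovyDeg) / 2.0));
        float[] m = new float[16]; // all other entries stay 0
        m[0]  = f / aspect;
        m[5]  = f;
        m[10] = (far + near) / (near - far);
        m[11] = -1f;
        m[14] = (2f * far * near) / (near - far);
        return m;
    }
}
```

You would recompute the view and model matrices similarly each frame and pass all three to the shader as uniforms.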

Related

Best way to save data of many instances

I have a particle system, and for it I render (for one particle effect, for example) 100 textured quads. If I add several particle effects it lags, because each particle has its own velocity (2f vector), position (3f vector), etc. (the vectors are from LWJGL).
As a consequence, each instance carries 5 or 6 data members. Now my question is:
Is there a way of making this better, so that not every instance holds 5 new vectors? (And I know there are many better ways of creating a particle system, but the one I chose was easy and lets me practice "performance boosting".)
Ok, so, I will refer to this code, which you probably took inspiration from.
I also assume you have at least a GL 3.3 profile.
In theory, to optimize as much as possible you should move Map<ParticleTexture, List<Particle>> particles (using a texture atlas) onto the GPU and upload only the data that changes per frame, such as the camera.
But this is not easy to do in one step, so I suggest you modify your current algorithm step by step, moving one thing at a time onto the GPU.
Some observations:
In prepare() and finishRendering(), enabling the i-th vertex attribute array is part of the VAO state; if you bind/unbind the VAO, that's enough. The glEnableVertexAttribArray and glDisableVertexAttribArray calls can be removed.
Use a uniform buffer; don't keep all those single uniforms on their own.
loader.updateVbo() is quite expensive: it creates a FloatBuffer every render and clears the buffer before copying the data.
You should allocate a float[] or a FloatBuffer just once, reuse it, and simply call glBufferSubData, avoiding glBufferData.
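The last point can be sketched as follows: allocate one direct FloatBuffer up front and refill it each frame, so the only per-frame GL work is a glBufferSubData on the already-sized VBO. The class and method names are illustrative (the actual GL call is shown as a comment since it needs a live context).

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

// Sketch: allocate the instance buffer once, reuse it every frame, and
// update the VBO with glBufferSubData instead of reallocating via
// glBufferData. Names here are illustrative.
public class InstanceBuffer {
    private final FloatBuffer buffer;

    public InstanceBuffer(int maxFloats) {
        // Direct buffer, allocated once at startup.
        buffer = ByteBuffer.allocateDirect(maxFloats * Float.BYTES)
                           .order(ByteOrder.nativeOrder())
                           .asFloatBuffer();
    }

    /** Refill the same buffer each frame; no new allocation. */
    public FloatBuffer fill(float[] instanceData, int count) {
        buffer.clear();
        buffer.put(instanceData, 0, count);
        buffer.flip();
        // Then, with the VBO bound (requires a GL context):
        // GL15.glBufferSubData(GL15.GL_ARRAY_BUFFER, 0, buffer);
        return buffer;
    }
}
```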

How to reduce time to render large surface data in OpenGL

I am currently working on a project that renders large oil-wells and sub-surface data on to the android tablet using OpenGL ES 2.0.
The data comes in from a restful call made by the client (Tablet) to the server. I need to render two types of data. One being a set of vertices where I just join all the vertices (Well rendering) and the other is the subsurface rendering where each surface has huge triangle-data associated with them.
I was able to reduce the size of the well by approximating the next point and constructing the data that is to be sent to the client. But this cannot be done to the surface data as each and every triangle is important to get the triangles joined and form the surface.
I would appreciate if you guys suggest an approach to either reduce the data from the server or to reduce the time taken to render such a huge data effectively.
The way you can handle such a complex mesh really depends on the scope of your project. Unfortunately there is not much we can say based on the provided inputs, and the task itself is not an easy one.
Usually when the mesh is very complex a typical approach to make the rendering process fast is to adopt dynamic Level Of Details (in programming terminology LOD).
The idea is to render "distant" meshes at a very low LOD (and therefore with a much lower number of vertices to render), and to replace each mesh with a higher-resolution version whenever the camera approaches it.
This technique is widely used in computer games, for instance when a terrain needs to be rendered. When the player is in a particular sector of the map, the mesh of that sector is at a high level of detail and the others are at low detail. As the player moves, the other sectors switch to "high resolution" (allow me the term).
It is not an easy thing to do, but it works in many, many situations.
This Gamasutra article has plenty of information on how the technique works:
http://www.gamasutra.com/view/feature/131596/realtime_dynamic_level_of_detail_.php?print=1
The idea, in your case, would be to take the mesh provided by the web service and treat it as the HD version of the mesh. Then (particularly if the mesh is composed of different objects), apply a triangular mesh simplification algorithm to create low-detail (LD) meshes of the same objects. An example of how you could proceed is well described here:
http://herakles.zcu.cz/~skala/PUBL/PUBL_2002/2002_Mesh-Simplification-ICCS2002.pdf
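The selection step of the scheme above can be sketched very simply: given precomputed meshes at several detail levels, pick a level from the camera distance each frame. The thresholds and class names below are illustrative placeholders, assuming you have already generated the LD meshes.

```java
// Sketch of distance-based LOD selection, assuming meshes have been
// precomputed at several detail levels (names are illustrative).
public class LodSelector {
    // Distance thresholds: below thresholds[i], render level i (0 = full detail).
    private final float[] thresholds;

    public LodSelector(float[] thresholds) {
        this.thresholds = thresholds;
    }

    /** Index of the detail level to render at this camera distance. */
    public int levelFor(float cameraDistance) {
        for (int i = 0; i < thresholds.length; i++) {
            if (cameraDistance < thresholds[i]) return i;
        }
        return thresholds.length; // lowest detail beyond the last threshold
    }
}
```

In practice you would also add hysteresis or blending between levels so meshes don't visibly "pop" when the camera crosses a threshold.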
I hope to have helped in some way.
Cheers
Maurizio

Switching from OpenGL ES 1.0 to 2.0

I have been developing an Android app using OpenGL 1.0 for quite some time using a naive approach to rendering, basically making a call to glColor4f(...) and glDrawArrays(...) with FloatBuffers each frame. I am hitting a point where graphics is becoming a huge bottleneck as I add more UI elements and the number of draw calls increases.
So I'm now looking for the best way to group all of these calls into one (or two or three) draw calls. It looks like the cleanest, most efficient and canonical way to do this is to use VBO objects, available from OpenGL ES 2.0 on. However, this would require a HUGE refactoring on my part to switch my whole graphics backend from ES 1.0 to ES 2.0. I am not sure if this is a good decision, or if there are acceptable ways to group my drawing calls in 1.0 that would work fine for relatively simple 2D data (squares, rounded rectangle TRIANGLE_FANs, etc.), or if it really might be worth biting the bullet and making the switch. I might also mention that I have a HEAVY reliance on translation and scaling that is so convenient with the fixed pipeline of ES 1.0.
Looking around, I am surprised to find almost NO people in my position, talking about the tradeoffs and complexity at hand for such a switch. Any thought?
I have a HEAVY reliance on translation and scaling
Note you can't batch anything if you change the model-view matrix between draw calls (ES 2.0 didn't change that).
VBOs are available from OpenGL ES 1.1, and they are probably available on the device you are targeting, even on ES 1.0 (ARB_vertex_buffer_object).
You can create a big VBO with world-space geometry (i.e., resolve scaling and translation on the CPU) and draw that. Even if you update this VBO each frame, in my experience it's fast enough. Sending thousands of small draw calls is almost always the slowest option.
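A minimal sketch of that batching idea, assuming simple 2D UI quads: apply each element's translation and scale on the CPU while packing every quad into one shared vertex array, then upload once and issue a single draw call. The class and method names are illustrative.

```java
// Sketch: resolve translation/scale on the CPU and pack all quads into one
// vertex array so everything draws with a single call. Names are illustrative.
public class QuadBatcher {
    /** Appends a scaled + translated unit quad (two triangles, xy only) to out.
     *  Returns the next free index, so calls can be chained. */
    public static int appendQuad(float[] out, int offset,
                                 float x, float y, float w, float h) {
        // Unit quad as two CCW triangles.
        float[] unit = { 0,0, 1,0, 1,1,   0,0, 1,1, 0,1 };
        for (int i = 0; i < unit.length; i += 2) {
            out[offset++] = x + unit[i]     * w; // world-space x
            out[offset++] = y + unit[i + 1] * h; // world-space y
        }
        return offset; // after filling: upload the array once, draw once
    }
}
```

After filling the array for all UI elements, one buffer upload and one glDrawArrays(GL_TRIANGLES, ...) replaces the per-element draw calls.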
Moving from a fixed pipeline to a full vertex/fragment shader pipeline is not easy at all. It requires a good amount of 3D knowledge, so be careful: write a prototype first. (World-space or object-space lighting? How do you transform normals? ...)
Vivien

passing vectors (and other structures) in opengl and own libraries

This is a code style & design question, perhaps dealing with tradeoffs. It is probably obvious.
Backstory:
When migrating from C++ to Java I encountered a difference I am now not sure how to deal with.
In all OpenGL calls you pass an array with an offset and an additional parameter which tells the function how the passed array is structured. Take glDrawArrays as an example.
So when drawing, it would be best to have all my vertices in one array, a FloatBuffer.
However, I also need those vertices for my physics calculations.
The question:
Should I create a separate buffer for physics and copy its results to the FloatBuffer every update, dealing with Vec3f and Point3f classes, since they cannot be passed to OpenGL functions because they might be fragmented (or can they?)?
Or should I have a separate class for dealing with my structures, which takes the offset along with the array:
public static void addVec3(float[] vec3in_a, int offset_a, float[] vec3in_b, int offset_b, float[] vec3out, int offset_out)
And what should the offsets represent? Should they account for the vec3 size and advance accordingly (offset_a *= 3), like an array of Vec3 would behave, or should they just offset into the array as plain float indices?
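One possible implementation of the helper with the signature from the question, treating the offsets as raw float indices (a caller thinking in whole-vector units multiplies by 3 itself, which matches how OpenGL's own offsets work). This is a sketch of one of the two options asked about, not a recommendation.

```java
// Sketch: offset-based vec3 addition over plain float arrays, with offsets
// interpreted as raw float indices (not vec3 counts).
public class VecMath {
    public static void addVec3(float[] vec3in_a, int offset_a,
                               float[] vec3in_b, int offset_b,
                               float[] vec3out, int offset_out) {
        vec3out[offset_out]     = vec3in_a[offset_a]     + vec3in_b[offset_b];
        vec3out[offset_out + 1] = vec3in_a[offset_a + 1] + vec3in_b[offset_b + 1];
        vec3out[offset_out + 2] = vec3in_a[offset_a + 2] + vec3in_b[offset_b + 2];
    }
}
```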
Thank you:)
Can't you do the physics calculations on the GPU? JOCL or shaders would be a possible route. Usually you try to avoid doing all the vertex transformations on the CPU (in Java, C, whatever) and sending them to the GPU every frame.
If you really have to do it in Java (on the CPU), you could adapt your math classes (Vec, Point, etc.) to store their data in a FloatBuffer. But this will certainly be slower than primitive floats, since read/write operations on a FloatBuffer carry some overhead.
Without knowing what you are actually doing, a copy from FloatBuffer -> math object and back might even be feasible. If it is not fast enough... optimize later :)

Fast graphing library

I've been using Incanter for my graphing needs, which was adequate but slow for my previous needs.
Now I need to embed a graph in a JPanel. Users will need to interact with the graph (e.g. clicking on certain points which the program would need to receive and deal with) by dragging and clicking. Zooming in a out is a must as well.
I've heard about JFreeChart in other SO discussions, but I see that Incanter uses it as its graphing engine, and it seemed somewhat slow then. Is it actually fast, and perhaps Incanter is doing things that slow it down?
I'm graphing up to 2 million points (simple xy-plots, really), though generally I will be graphing fewer. Using MATLAB, such a plot renders in a few seconds, but Incanter can hang for minutes.
So is JFreeChart the way to go? Or something else, given my needs?
(Also, it needs to be free, as it is for research.)
Unfortunately, general purpose graphing solutions probably aren't going to scale well to 2 million points - that's big enough that you will need something specialized if you want interactive performance.
I can see a few sensible options:
Write your own custom "plotter" function that is optimized for drawing large numbers of points. You'd have to test it, but I think you might just about get the performance you want by writing the points directly to a BufferedImage using setRGB in a tight loop. If that still isn't fast enough, you can write the points directly into a byte array and construct a MemoryImageSource.
Exclude points so that you are only drawing e.g. 10,000 points maximum. This may be perfectly acceptable as long as you only really care about the overall shape of the scatter plot rather than individual points.
Pre-render all the points into e.g. a large BufferedImage then allow users to zoom in and out / interact with this static image. You might be able to "hack" JFreeChart to do this.
If OpenGL is an option (will require native code + getting up a steep learning curve!), then drop all the points in a big vertex array and get the graphics card to do it all for you. Will handle 2 million points in real-time without any difficulty.
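The first option above can be sketched in a few lines: map each point linearly into pixel coordinates and write it into a BufferedImage with setRGB in a tight loop. The class name, the fixed white-on-black styling, and the assumption that axis ranges are known up front are all illustrative choices.

```java
import java.awt.image.BufferedImage;

// Sketch of the custom-plotter option: draw xy points straight into a
// BufferedImage via setRGB, bypassing any charting library.
public class FastScatter {
    public static BufferedImage plot(double[] xs, double[] ys,
                                     double xMin, double xMax,
                                     double yMin, double yMax,
                                     int width, int height) {
        BufferedImage img = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
        for (int i = 0; i < xs.length; i++) {
            // Linear map into pixel space; y is flipped so yMax is at the top.
            int px = (int) ((xs[i] - xMin) / (xMax - xMin) * (width - 1));
            int py = (int) ((yMax - ys[i]) / (yMax - yMin) * (height - 1));
            if (px >= 0 && px < width && py >= 0 && py < height) {
                img.setRGB(px, py, 0xFFFFFF); // white point on black background
            }
        }
        return img;
    }
}
```

If setRGB still proves too slow for 2 million points, the same loop can write into an int[] and wrap it with a MemoryImageSource, as mentioned above.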
MathGL is a fast and free (GPL) plotting library. But I have never tested its Java interface (SWIG-based), since I'm not familiar with Java :(. So if anyone can help with testing, I'd be thankful.
