I'm using JOGL (OpenGL for Java) for my application and I need to draw tons of strings on screen at once, and my current solution is far too slow. Right now I'm drawing the strings with TextRenderer using the draw3D method, and for even a moderate number of strings (around 300-500) it kills the FPS. I started experimenting with drawing the text onto the objects' textures, which is much faster, but there are a few problems with it. The first is that allocating all those textures requires a lot of memory. The second is that I need to find a way to size each texture so it's only as big as its string, and then map it onto the object without stretching. The problem there is that all these thousands of boxes share a single model being rendered with a call list, and I'm not sure it's possible to change the texture mapping for each object in that situation.
I don't mind whether the text appears flat or 3D; it just has to be positioned in 3D space. I would prefer to render the text at the highest quality possible without sacrificing too much speed, since readability of the text is the most important part of the application. Also, nearly all of the strings are different; there aren't many duplicates.
So, my question: Am I going down the right path with drawing the strings on the textures, and if so, how can I overcome those 2 problems? Or is there another method that would suit my needs?
Depending on exactly how TextRenderer works, you might be able to use display lists to batch up your text-drawing commands.
If TextRenderer works by keeping a texture of individual character glyphs and piecing a string together one glyph at a time, it'll be fine: just bookend your text-drawing code with glNewList and glEndList. Once a list is defined, just use glCallList to use it.
If however, TextRenderer works by drawing complete strings into a texture and using one quad per string - display lists may not work. If the strings in one batch do not all fit within TextRenderer's cache, it will delete the least-recently used one to reclaim some space. Display lists will only recreate the OpenGL calls made, and so the work done by TextRenderer to update the string cache texture will be lost and you'll get incorrect output. From a quick scan of the source, I suspect that TextRenderer works in this manner.
To summarise: display lists will greatly speed up your rendering, but only if you don't overflow TextRenderer's string cache texture and don't use the TextRenderer after the display list has been defined.
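In the glyph-per-quad case, the bookending might look roughly like this. This is only a sketch: it assumes the JOGL 2 package names, that all the strings fit in TextRenderer's cache, and a hypothetical LabelBatch wrapper of my own naming.

```java
import com.jogamp.opengl.GL2;
import com.jogamp.opengl.util.awt.TextRenderer;

// Hypothetical helper: records one batch of label draws into a display list.
public final class LabelBatch {
    private int listId = -1;

    // Build (or rebuild) the list, e.g. in init() or whenever the labels change.
    public void build(GL2 gl, TextRenderer renderer, String[] labels, float[][] positions) {
        if (listId < 0) {
            listId = gl.glGenLists(1);
        }
        gl.glNewList(listId, GL2.GL_COMPILE);
        renderer.begin3DRendering();
        for (int i = 0; i < labels.length; i++) {
            // draw3D(text, x, y, z, scale); with a glyph-based renderer these
            // calls compile down to plain textured quads inside the list
            renderer.draw3D(labels[i], positions[i][0], positions[i][1], positions[i][2], 0.01f);
        }
        renderer.end3DRendering();
        gl.glEndList();
    }

    // Replay the recorded GL calls each frame.
    public void draw(GL2 gl) {
        gl.glCallList(listId);
    }
}
```

build() would be called once (or whenever the labels change), and draw() replayed every frame.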
If you can't meet these constraints, you're going to have to go a bit hardcore and write your own text renderer that renders glyph by glyph; it'll then be trivial to cache the output geometry and extremely quick to re-render. There's an example of such a system here, with the tool to create a font here. It uses LWJGL rather than JOGL, but the translation between the two will be the least of your worries if you want to integrate it, as it's intertwined with the example's texture management etc.
Related
I am developing an isometric game in Java2D. Note that I do not have direct access to hardware pixel shaders, and real-time software pixel shaders aren't practical (I can do a single pass over every entity texture without a noticeable hit on performance).
I know the typical method would be to somehow encode the depth of the individual pixels into a depth buffer and look that up. However, I don't know how I can do that efficiently in Java2D. How would I store the depth buffer? How would I filter out the alpha in an image? Etc.
Up until now I have just been inverting the projection I use to calculate the tile coordinates. However, that doesn't work well when you have entities that render outside of those tiles' bounds.
Another method I considered was using a color map, but I have the same problems with it as I do with the depth buffer (and if I can get the depth buffer working, I'd much rather use that).
I've resolved this quite nicely. The solution is actually very simple, just unconventional.
The graphics are depth-sorted in a TreeMap and then rendered to the screen. To translate the cursor location to the proper image it falls over, you simply traverse this TreeMap in reverse render order (keeping it around until the next render cycle), testing each image's pixel under the cursor and checking whether it is transparent.
The solution is in the open-source project, in the io.github.jevaengine.world.World class, pick method: https://github.com/JeremyWildsmith/JevaEngine/blob/master/jevaengine/src/main/java/io/github/jevaengine/world/World.java
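This is not the engine's actual pick() implementation, just a minimal sketch of the idea; Sprite here is a hypothetical type exposing its on-screen position and the BufferedImage it was drawn from.

```java
import java.awt.image.BufferedImage;
import java.util.Map;
import java.util.TreeMap;

public final class DepthPick {

    /** Hypothetical sprite: where it was drawn and the image it was drawn from. */
    public interface Sprite {
        int getScreenX();
        int getScreenY();
        BufferedImage getImage();
    }

    /** Returns the topmost sprite with a non-transparent pixel under the cursor, or null. */
    public static Sprite pick(TreeMap<Float, Sprite> depthSorted, int cursorX, int cursorY) {
        // Traverse in reverse render order: whatever was drawn last is on top.
        for (Map.Entry<Float, Sprite> e : depthSorted.descendingMap().entrySet()) {
            Sprite s = e.getValue();
            int localX = cursorX - s.getScreenX();
            int localY = cursorY - s.getScreenY();
            BufferedImage img = s.getImage();
            if (localX < 0 || localY < 0 || localX >= img.getWidth() || localY >= img.getHeight()) {
                continue; // cursor isn't even over this sprite's bounding box
            }
            int alpha = img.getRGB(localX, localY) >>> 24;
            if (alpha != 0) {
                return s; // first non-transparent pixel hit wins
            }
        }
        return null; // cursor is over empty space
    }
}
```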
I've been trying various ways of creating a two-dimensional tile-based game for a few months now. I have always made each tile a separate object of a Tile class, with the tile objects stored in a two-dimensional array. This has proven to be extremely impractical, mostly in terms of performance when many tiles are rendered at once. I have mitigated this by only rendering tiles within a certain distance of the player, but that isn't great either. I have also had problems with null-pointer exceptions when I try to edit a tile's values in-game, which comes down to the objects in the 2D array not being properly initialized.
Is there any other, simpler way of doing this? I can't imagine every tile-based game does it this exact way; I must be overlooking something.
EDIT: Perhaps LWJGL just isn't the correct library to use? I am having similar problems with implementing a font system with LWJGL... typing out more than a sentence will bring down the FPS by 100 or even more.
For static objects (not going anywhere, staying where they are) 1 tile = 1 object is OK. That's how it was done in Wolf3d. For moving objects you have multiple options.
You can, if you really want to, store an object's sub-parts in adjacent cells/tiles when the object isn't contained fully within just one of them and crosses one or more cell/tile boundaries. But that isn't very convenient, as you'd need to split your objects into parts on the fly.
A more reasonable approach is to not store moving objects in cells/tiles at all and to process them more or less independently of the static objects. You will then need some code to determine object visibility. In graphics, the most basic performance problems come from unnecessary calculation and rendering: generally, you don't want to even try to render what's invisible. Likewise, if some computations (especially complex ones) can be moved outside of the innermost loops, they should be.
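As a rough sketch of that visibility test (Entity and render() are placeholders, not anything from a particular engine):

```java
import java.awt.Rectangle;
import java.util.List;

// Sketch: cull moving objects against the camera's view rectangle before drawing.
public final class VisibilityCull {

    /** Placeholder for whatever your moving-object class looks like. */
    public interface Entity {
        Rectangle getBounds(); // bounding box in the same coordinate space as the view
        void render();
    }

    public static void renderVisible(List<Entity> movingObjects, Rectangle view) {
        for (Entity e : movingObjects) {
            if (view.intersects(e.getBounds())) {
                e.render(); // anything the camera cannot see is skipped entirely
            }
        }
    }
}
```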
Other than that, it's pretty hard to give any specific advice with so few details about what you're doing and how you're doing it, and without seeing the actual code. You should really try to make your questions more specific.
A two-dimensional array of Tile objects should be fine; this is what most 2D games use, and you should certainly be able to get good enough performance out of OpenGL / LWJGL to render it at a good speed (100+ FPS).
Things to check:
Make sure you are clipping so that you only display the visible set of tiles (according to the screen width and height and the player's position); a sketch of this is shown after these points.
Make sure the code to draw each tile is fast... ideally you should be drawing just one textured square for each tile. In particular, you shouldn't be doing any complex operations on a per-tile basis in your rendering code.
If you're clever, you can draw multiple tiles in one OpenGL call with VBOs and careful use of texture coordinates, but this is probably unnecessary for a tile-based game.
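Here is a rough sketch of the clipping from the first point; the Tile interface, the camera variables and the tile size are stand-ins for whatever your game already has:

```java
// Sketch, not engine code: draw only the tiles the camera can see.
public final class TileClipping {

    /** Hypothetical tile abstraction; your Tile class would provide something similar. */
    public interface Tile {
        void draw(int screenX, int screenY);
    }

    public static void drawVisibleTiles(Tile[][] tiles, int tileSize,
                                        int camX, int camY, int screenW, int screenH) {
        // camX/camY: camera's top-left corner in world pixels.
        int firstCol = Math.max(0, camX / tileSize);
        int firstRow = Math.max(0, camY / tileSize);
        int lastCol  = Math.min(tiles[0].length - 1, (camX + screenW) / tileSize);
        int lastRow  = Math.min(tiles.length - 1, (camY + screenH) / tileSize);

        for (int row = firstRow; row <= lastRow; row++) {
            for (int col = firstCol; col <= lastCol; col++) {
                // One textured quad per tile; everything off-screen is never touched.
                tiles[row][col].draw(col * tileSize - camX, row * tileSize - camY);
            }
        }
    }
}
```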
I have made a GUI with multiple components, and now I need to populate those components with text.
My specifications:
Preferably uses LWJGL (OpenGL)
Isn't exceptionally complicated
Doesn't use any other external libraries (bar the LWJGL library)
Has to be fairly optimised; it will be used a lot in a very FPS-intensive GUI
Has the possibility of being anti-aliased?
Should be able to run on most major operating systems
How could I do this in Java?
I would consider using SVG, and particularly Batik, for managing the fonts. I am doing quite a lot with this myself and it's somewhat of a learning curve. I would be prepared to start simply and find out which particular features you need before worrying about performance. But until you give clearer requirements, I'm not sure we can help much more.
You have three options: Bitmap fonts, texture fonts and vector fonts.
Bitmap fonts are only useful if all you want to do is render text for a 2D GUI. However, you can't do antialiasing if you use Bitmap fonts. On the other hand, they're pretty easy to use and they're quick to render.
Texture fonts allow for antialiasing, but again they're best for 2D GUIs. If you want to render text in world space, you'll get lots of artifacts because of the texture scaling that takes place. To use texture fonts, you have to create a texture atlas that contains an image for each character of the particular font you want to use (usually you'll want to restrict the character set to ASCII, otherwise the texture will be too large). You can use AWT to create a rectangular image that contains all the characters you need. Then you can render a character by rendering a quad with the appropriate texture coordinates for that character. It is advisable to use a luminance-alpha texture so that you can blend it with the color you want the text to be. You can optimize this by using display lists for each character and possibly for each string, but you'll run into problems with kerning etc.
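As a rough sketch of the atlas-building step with AWT (the 16-column grid, the cell size and the white-on-transparent rendering are arbitrary choices; a real implementation would also record per-glyph advances for proper spacing):

```java
import java.awt.Color;
import java.awt.Font;
import java.awt.Graphics2D;
import java.awt.RenderingHints;
import java.awt.image.BufferedImage;

// Sketch: bake printable ASCII (32..126) into a 16x6 grid of fixed-size cells.
public final class FontAtlas {
    public static BufferedImage build(Font font, int cellW, int cellH) {
        BufferedImage atlas = new BufferedImage(cellW * 16, cellH * 6, BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = atlas.createGraphics();
        g.setRenderingHint(RenderingHints.KEY_TEXT_ANTIALIASING,
                           RenderingHints.VALUE_TEXT_ANTIALIAS_ON);
        g.setFont(font);
        g.setColor(Color.WHITE); // white glyphs: tint later by blending with the text color
        int ascent = g.getFontMetrics().getAscent();
        for (int c = 32; c <= 126; c++) {
            int index = c - 32;
            int cellX = (index % 16) * cellW;
            int cellY = (index / 16) * cellH;
            g.drawString(String.valueOf((char) c), cellX, cellY + ascent);
        }
        g.dispose();
        return atlas; // upload as a texture; a character's texcoords come from its cell
    }
}
```

The texture coordinates for a character then come straight from its cell in the grid.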
Vector fonts give you the best results if you want to render your text in world space. They will give you perfect font rendering including kerning, but they're more expensive to render. My usual approach is to create a path (using AWT) for each string that I want to render, flatten that path and then trace it with the GLU tessellator. This gives you a bunch of triangles, triangle strips and triangle fans which you can put into a VBO for optimal performance. Then you can render the string by issuing the appropriate rendering commands for the VBO. You can optimize this further by using a display list for each string. That way, you only have to send one command per string, but of course this will still be more expensive than the other methods.
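A sketch of the AWT side of that approach; the flattened contours produced here are what you would then feed, contour by contour, to the GLU tessellator (which is omitted):

```java
import java.awt.Font;
import java.awt.Shape;
import java.awt.font.FontRenderContext;
import java.awt.font.GlyphVector;
import java.awt.geom.PathIterator;
import java.util.ArrayList;
import java.util.List;

// Sketch: turn a string into flattened contours (lists of x,y pairs).
public final class TextOutline {
    public static List<List<double[]>> contours(Font font, String text, double flatness) {
        FontRenderContext frc = new FontRenderContext(null, true, true);
        GlyphVector gv = font.createGlyphVector(frc, text);
        Shape outline = gv.getOutline();

        List<List<double[]>> contours = new ArrayList<>();
        List<double[]> current = null;
        double[] coords = new double[6];
        // The flatness argument makes the iterator return only line segments.
        for (PathIterator it = outline.getPathIterator(null, flatness); !it.isDone(); it.next()) {
            switch (it.currentSegment(coords)) {
                case PathIterator.SEG_MOVETO:
                    current = new ArrayList<>();
                    contours.add(current);
                    current.add(new double[] { coords[0], coords[1] });
                    break;
                case PathIterator.SEG_LINETO:
                    current.add(new double[] { coords[0], coords[1] });
                    break;
                case PathIterator.SEG_CLOSE:
                default:
                    break; // contour is implicitly closed back to its first point
            }
        }
        return contours;
    }
}
```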
I am writing a game on Android, and it is coming along well enough. I am trying to keep everything as efficient as possible, so I am storing as much as I can in Vertex Buffer Objects to avoid unnecessary CPU overhead. However, how to efficiently draw lots of unrelated primitives, or even a variable-length string of sprites (such as text on the screen), is escaping me.
The purpose of these primitives is menus and buttons, as well as text.
For drawing the menus, I could just make a vertex array for each element (menu background, buttons, etc.), but since they are all just quads, this feels very inefficient. I could also create a sort of drawQuad() function that transparently loads a single saved vertex array with the data for x/y, width/height, color, texture and whatever else. However, reloading each element of the array with new coordinates and other data each time, copying it into the FloatBuffer (for the C++ guys: this is an extra step you have to do in Java to pass the data to GL), and resending it to the GPU also feels lacking in efficiency, though I don't know how else I could do it. (One efficiency boost I can see is setting the quad coordinates to a unit square and then using uniforms to scale it, but this seems unscalable.)
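To make the drawQuad() idea concrete, here is roughly what I mean; this is a sketch only, the attribute handle would come from my shader program, and the color/texture setup is left out:

```java
import android.opengl.GLES20;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

// Sketch of a reusable drawQuad(): one direct FloatBuffer allocated once and
// refilled per quad. positionHandle comes from glGetAttribLocation elsewhere.
public final class QuadDrawer {
    private final FloatBuffer quad =
            ByteBuffer.allocateDirect(8 * 4).order(ByteOrder.nativeOrder()).asFloatBuffer();

    public void drawQuad(int positionHandle, float x, float y, float w, float h) {
        quad.clear();
        quad.put(new float[] {
                x,     y,
                x + w, y,
                x,     y + h,
                x + w, y + h });
        quad.position(0);

        GLES20.glEnableVertexAttribArray(positionHandle);
        GLES20.glVertexAttribPointer(positionHandle, 2, GLES20.GL_FLOAT, false, 0, quad);
        GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);
        GLES20.glDisableVertexAttribArray(positionHandle);
    }
}
```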
For text it is even worse, since I don't know how long the text will be and don't want to create larger buffers for longer text (causing the GC to fire at random later). The alternative is to draw each letter with an independent draw command, but that also seems very inefficient for even a hundred letters on screen (since I've read you should have as few draw commands as possible).
It is also possible that I am looking way too deep into the necessary optimization of openGL, but I don't want to back myself into a corner with some terrible design early on.
You should try looking into the idea of interleaving data for your glDrawArrays calls.
Granted, this link is for iPhone, but there is a nice graphic at the bottom of the page that details this concept: http://iphonedevelopment.blogspot.com/2009/06/opengl-es-from-ground-up-part-8.html
I'm going to assume for drawing your characters that you are specifying some vertex coords and some texture coords into some sort of font bitmap to pick the correct character.
So you could envision your FloatBuffer as looking like
[vertex 1][texcoord 1][vertex 2][texcoord 2][vertex 3][texcoord 3]
[vertex 2][texcoord 2][vertex 3][texcoord 3][vertex 4][texcoord 4]
The above would represent a single character in your sentence if you're using GL_TRIANGLES, and you can extend the idea with vertices 5-8 to represent the second character, and so on and so forth. You can then draw all of your text on screen with a single glDrawArrays call. You might be worried about the redundant data in your FloatBuffer, but the savings are huge: rendering a teapot with 1200 vertices and this redundant data in my buffer, I saw a very visible speed increase over calling glDrawArrays for each individual triangle, maybe something like 10 times better.
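A sketch of that layout in code, assuming a 16x16 grid of glyphs in the font texture and attribute handles from your own shader program (both are my assumptions, not details from the demo mentioned below):

```java
import android.opengl.GLES20;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

// Sketch: two triangles (6 vertices) per character, each vertex interleaved as [x, y, u, v].
public final class TextBatch {
    private static final int FLOATS_PER_VERTEX = 4; // x, y, u, v
    private static final int VERTS_PER_CHAR = 6;

    public static FloatBuffer build(String text, float startX, float startY, float charSize) {
        FloatBuffer buf = ByteBuffer
                .allocateDirect(text.length() * VERTS_PER_CHAR * FLOATS_PER_VERTEX * 4)
                .order(ByteOrder.nativeOrder()).asFloatBuffer();
        float cell = 1f / 16f; // assumed 16x16 grid of glyphs starting at ASCII 32
        for (int i = 0; i < text.length(); i++) {
            int c = text.charAt(i) - 32;
            float u = (c % 16) * cell, v = (c / 16) * cell;
            float x = startX + i * charSize, y = startY, s = charSize;
            buf.put(new float[] {
                x,     y,     u,        v + cell,   // triangle 1
                x + s, y,     u + cell, v + cell,
                x,     y + s, u,        v,
                x + s, y,     u + cell, v + cell,   // triangle 2
                x + s, y + s, u + cell, v,
                x,     y + s, u,        v });
        }
        buf.position(0);
        return buf;
    }

    /** One draw call for the whole string. */
    public static void draw(FloatBuffer buf, int posHandle, int texHandle, int charCount) {
        int stride = FLOATS_PER_VERTEX * 4; // bytes between consecutive vertices
        buf.position(0);
        GLES20.glVertexAttribPointer(posHandle, 2, GLES20.GL_FLOAT, false, stride, buf);
        buf.position(2);
        GLES20.glVertexAttribPointer(texHandle, 2, GLES20.GL_FLOAT, false, stride, buf);
        GLES20.glEnableVertexAttribArray(posHandle);
        GLES20.glEnableVertexAttribArray(texHandle);
        GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, charCount * VERTS_PER_CHAR);
    }
}
```

build() only runs when the string changes; draw() is the single call per frame for the whole string.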
I have a small demo on sourceforge where I use data interleaving to render the teapot I mentioned earlier.
It's the ShaderProgramTutorial.rar: https://sourceforge.net/projects/androidopengles/files/ShaderProgram/
Look in teapot.java in the onDrawFrame function to see it.
On a side note you might find some of the other things on that sourceforge page helpful in your future Android OpenGL ES 2.0 fun!
So, I'm creating a 2d top-down game in Java.
I'm following instructions from Java 2D: Hardware Accelerating - Part 2 - Buffer Strategies to take advantage of hardware acceleration.
Basically, what I'm thinking is this:
I'd like to be able to easily add more sections to the map. So I'd rather not go the route suggested in a few of the tutorials I've seen (each map tile has an adjacency list of surrounding tiles; beginning with a center tile, populate the screen with a breadth-first search).
Instead, my idea would be to have screen-sized collections of tiles (say 32x32 for simplicity), and each of these screen "chunks" would have a list referencing each adjacent collection. Then I would create a buffer for the current screen and the 8 adjacent screens and draw the visible portion in the VRAM buffer.
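To make that concrete, here is a rough sketch of what I have in mind for one chunk (the tile images are placeholders; a compatible image is used so Java2D can manage and, where possible, accelerate it):

```java
import java.awt.Graphics2D;
import java.awt.GraphicsConfiguration;
import java.awt.GraphicsEnvironment;
import java.awt.Transparency;
import java.awt.image.BufferedImage;

// Sketch of one screen-sized "chunk": its tiles are composited once into a
// compatible image, then the whole chunk is drawn as a single blit each frame.
public final class Chunk {
    public static final int TILES_PER_SIDE = 32;

    private final BufferedImage image;

    public Chunk(BufferedImage[][] tiles, int tileSize) {
        GraphicsConfiguration gc = GraphicsEnvironment.getLocalGraphicsEnvironment()
                .getDefaultScreenDevice().getDefaultConfiguration();
        image = gc.createCompatibleImage(TILES_PER_SIDE * tileSize,
                                         TILES_PER_SIDE * tileSize,
                                         Transparency.OPAQUE);
        Graphics2D g = image.createGraphics();
        for (int row = 0; row < TILES_PER_SIDE; row++) {
            for (int col = 0; col < TILES_PER_SIDE; col++) {
                g.drawImage(tiles[row][col], col * tileSize, row * tileSize, null);
            }
        }
        g.dispose();
    }

    /** Blit the prerendered chunk at its screen position. */
    public void draw(Graphics2D screen, int screenX, int screenY) {
        screen.drawImage(image, screenX, screenY, null);
    }
}
```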
My question is, would this be a correct way to go about this, or is there a better option? I've looked through quite a few tutorials, but they all seem to offer the same (seemingly high maintenance) options.
It would seem this would be the better choice, as doing things at the tile level would require 1024 times as many adjacency lists. Also, the reason I was considering putting only the visible portion in VRAM, while leaving the "current" screen and its adjacent screens in standard buffers, is that I'm new to hardware acceleration and am not entirely sure how much space it is reasonable to assume is available. Since Java attempts to accelerate standard buffers anyway, should it theoretically be as fast as putting each one in VRAM?
Any and all suggestions are welcome!
I haven't looked at any of the popular tile-based game engines, but I'd consider using the flyweight pattern to render only the tiles that are visible in the viewport of a JScrollPane. JTable is both an example and a usable implementation.
Addendum: One advantage of the JTable approach is view-model separation, which allows one to relegate the acquisition of tile-related resources to the model. This makes it easier to optimize without having to change the view.
Even without scroll bars, one can leverage scrollRectToVisible() by extending JComponent or an appropriate subclass. The setDoubleBuffered() method may be helpful, too.
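A rough sketch of that flyweight idea with a plain JComponent; TileModel is a hypothetical interface standing in for the tile data, and wrapping the component in a JScrollPane means only the cells intersecting the paint clip ever get drawn:

```java
import java.awt.Dimension;
import java.awt.Graphics;
import java.awt.Rectangle;
import java.awt.image.BufferedImage;
import javax.swing.JComponent;

// Sketch: the view keeps no per-tile objects; it asks the model for the image
// of each cell that intersects the current clip and draws just those.
public class TileView extends JComponent {

    /** Hypothetical model interface; your map data would sit behind this. */
    public interface TileModel {
        int getRows();
        int getColumns();
        BufferedImage getTileImage(int row, int col);
    }

    private final TileModel model;
    private final int tileSize;

    public TileView(TileModel model, int tileSize) {
        this.model = model;
        this.tileSize = tileSize;
        setDoubleBuffered(true);
        setPreferredSize(new Dimension(model.getColumns() * tileSize,
                                       model.getRows() * tileSize));
    }

    @Override
    protected void paintComponent(Graphics g) {
        Rectangle clip = g.getClipBounds();
        if (clip == null) {
            clip = new Rectangle(0, 0, getWidth(), getHeight());
        }
        int firstRow = Math.max(0, clip.y / tileSize);
        int firstCol = Math.max(0, clip.x / tileSize);
        int lastRow = Math.min(model.getRows() - 1, (clip.y + clip.height) / tileSize);
        int lastCol = Math.min(model.getColumns() - 1, (clip.x + clip.width) / tileSize);
        for (int row = firstRow; row <= lastRow; row++) {
            for (int col = firstCol; col <= lastCol; col++) {
                g.drawImage(model.getTileImage(row, col), col * tileSize, row * tileSize, null);
            }
        }
    }
}
```

Dropping this into new JScrollPane(new TileView(model, 32)) gives scrolling for free, and scrollRectToVisible() can be used to follow the player.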