Using world coordinates - java

I am currently using pixels as units for placing objects within my world; however, this gets tedious because I only ever place objects at 16-pixel intervals. For example, I would like to be able to place an object at position 2 and have it rendered at pixel position 32.
I was wondering if the best way to do this is simply having a pixel-to-unit variable and multiplying/dividing based on what I need to be doing with pixels or if there is a better way.

You shouldn't use a constant pixel-to-unit conversion, as this would lead to different behavior on different screen sizes/resolutions.
Don't forget about different aspect ratios either; you need to take care of them as well.
The way to solve this problem is to use Viewports.
Some of them support virtual screen sizes, which are what you are looking for. You can calculate everything in your virtual space, and Libgdx converts those coordinates to pixels for you.
They also handle different aspect ratios in different ways.
The FitViewport, for example, shows black borders if the screen's aspect ratio does not match the virtual one.
The StretchViewport, instead of showing black borders, stretches the image to fill the screen.
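For example, a minimal sketch of that setup, assuming one world unit per 16-pixel tile and a virtual world of 30x20 units (the batch and texture are assumed to be created elsewhere):

import com.badlogic.gdx.graphics.OrthographicCamera;
import com.badlogic.gdx.utils.viewport.FitViewport;

// Virtual size in world units: one unit = one 16-pixel tile.
OrthographicCamera camera = new OrthographicCamera();
FitViewport viewport = new FitViewport(30, 20, camera);

// In resize(): let the viewport recompute the pixel mapping.
viewport.update(width, height, true);

// In render(): apply the viewport and draw in world units.
viewport.apply();
batch.setProjectionMatrix(camera.combined);
batch.begin();
batch.draw(texture, 2, 0, 1, 1); // placed at unit 2, one unit wide - no pixel math
batch.end();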

Related

Is there a way to draw Strings in libgdx without BitmapFonts?

I want to draw Strings in my Libgdx game but I can't use BitmapFonts because the scale of my game is too small to use them.
It sounds like you mean the scale of your viewport is too small to show fonts correctly. There are two solutions. The first is better for legibility while the second is quick and dirty.
One is to use a second viewport for the UI that has an appropriate scale for text. You would first call gameViewport.apply(), draw the game, and end the batch. Then use uiViewport.apply() and then draw the UI. The downside with this method would be if you want to draw text that aligns with moving objects in the game, you would have to use the two viewports to convert coordinates. Otherwise, this is the ideal method to get a crisp looking UI. Ideally you would use a ScreenViewport and select a font size at runtime based on the screen dimensions, either by shipping your game with multiple versions of the font at different scales, or by using FreeTypeFontGenerator.
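A rough sketch of that flow, assuming gameViewport, uiViewport, batch, and font are already set up:

// Draw the game world with its own small-scale viewport.
gameViewport.apply();
batch.setProjectionMatrix(gameViewport.getCamera().combined);
batch.begin();
// ... draw game objects ...
batch.end();

// Then draw the UI with a pixel-scale viewport so text stays crisp.
uiViewport.apply();
batch.setProjectionMatrix(uiViewport.getCamera().combined);
batch.begin();
font.draw(batch, "Score: 42", 20, uiViewport.getWorldHeight() - 20);
batch.end();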
The second method is to scale down all your text. First call bitmapFont.setUseIntegerPositions(false) so it won't round positions off to integers. Then call bitmapFont.setScale() with however much you want to shrink it to fit in your game viewport.
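For example (note that in newer Libgdx versions the scale setter lives on the font data as font.getData().setScale(), while older versions had setScale() directly on BitmapFont):

BitmapFont font = new BitmapFont(); // default font
font.setUseIntegerPositions(false); // allow fractional positions in a small-unit world
font.getData().setScale(1f / 16f);  // e.g. shrink so 16 font pixels = one world unit
font.draw(batch, "Hello", 2f, 3f);  // coordinates are now in world units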
There is a gdx-freetype project:
https://www.badlogicgames.com/wordpress/?p=2300
and it uses TrueType fonts as source to generate bitmap font on the fly.
Not sure how stable this is - I haven't used it myself.
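Usage is roughly as follows (the font path and size are placeholders):

import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.g2d.BitmapFont;
import com.badlogic.gdx.graphics.g2d.freetype.FreeTypeFontGenerator;

// Generate a BitmapFont at the exact pixel size you need, at runtime.
FreeTypeFontGenerator generator =
        new FreeTypeFontGenerator(Gdx.files.internal("fonts/myfont.ttf"));
FreeTypeFontGenerator.FreeTypeFontParameter parameter =
        new FreeTypeFontGenerator.FreeTypeFontParameter();
parameter.size = 24; // pick at runtime, e.g. based on screen dimensions
BitmapFont font = generator.generateFont(parameter);
generator.dispose(); // the generated BitmapFont stays valid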

Transparency issue with opengl/lwjgl

I am attempting to draw two textures containing transparency in 3D space. When they do not overlap, they work fine.
However, when one texture overlaps the other, the transparent part of the front texture cuts out the one behind it.
I use GL_SRC_ALPHA and GL_ONE_MINUS_SRC_ALPHA when initialising blending.
You need to either depth sort or use alpha testing:
glEnable(GL_ALPHA_TEST);
glAlphaFunc(GL_GREATER, 0.0f);
which will only draw pixels that have an alpha value greater than 0. However, this doesn't help when you need to blend partially transparent pixels. Andon's solution is the one that I use, although I work in 2D and I have to have transparency for smoke effects.
One possibility is to use the discard keyword in the fragment shader, since the fixed-function alpha test is no longer available in modern (core-profile) OpenGL. This has the disadvantage of producing aliased object edges.
Another possibility is to depth-sort the objects and draw back to front. Obvious disadvantage is having to perform the transformations and the sorting in the first place. This can be sometimes avoided if the order of the objects can be determined statically (when the camera doesn't change much). Another disadvantage is overdrawing of the shaded pixels by something different, therefore throwing away performance.
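A minimal sketch of the back-to-front sort, with a hypothetical RenderObject type that holds a world position and knows how to draw itself:

import java.util.Comparator;
import java.util.List;

// Sort so the object farthest from the camera is drawn first.
static void drawSortedBackToFront(List<RenderObject> objects,
                                  float camX, float camY, float camZ) {
    objects.sort(Comparator.comparingDouble(o ->
            -((o.x - camX) * (o.x - camX)
            + (o.y - camY) * (o.y - camY)
            + (o.z - camZ) * (o.z - camZ))));
    for (RenderObject o : objects) {
        o.draw(); // nearer objects blend over the ones already drawn
    }
}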
Finally, you can use alpha-to-coverage, where the antialiasing hardware is employed to take care of the transparency. This doesn't require sorting and makes the edges of the objects smooth. The disadvantage is that it is enabled per rendering context and may not be available everywhere.
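In LWJGL, for example, enabling it is a single call, assuming the context was created with a multisampled (MSAA) framebuffer:

import org.lwjgl.opengl.GL11;
import org.lwjgl.opengl.GL13;

// The MSAA hardware converts each fragment's alpha into a coverage mask,
// so transparency works without sorting (at MSAA-sample granularity).
GL11.glEnable(GL13.GL_SAMPLE_ALPHA_TO_COVERAGE);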

Canvas/Stage Size in Flash is too small and cannot show entire level

This isn't directly a programming problem, but I feel it can still fall under the category; I am sorry if this is the wrong place. I am making a game in Flash using Box2D, and I decided to draw the levels in Flash because the level design would look better. The levels are very large (this level is 10,000 pixels long) and the canvas in Flash just won't display everything.
The preview in the library seems to be able to display more of the drawing than the stage does. How do I go about making the canvas longer? Should I try upgrading to a newer version of Flash, and does that version allow this?
You just don't put everything on your canvas at once; instead, draw only those level primitives or parts that are visible right now. Or, if your level is basically one simple shape, you can just change its X and Y so that the relevant part of the level is displayed on stage.
Don't use giant bitmaps - they use a lot of memory, and even if not all of the content is visible, they will degrade performance considerably. For this reason, Flash imposes a size limit of 4095x4095 pixels (or an equivalent total number of pixels in non-square formats).
The way to deal with this is to tile your graphics into parts of equal size, preferably smaller than the stage (1/2 or 1/3 side length is a good measure). You then place them all as a grid into a larger Sprite or MovieClip and set visible=false; on each tile. Then, at runtime, your game loop must check each frame which of the tiles should actually appear on the stage - and only those should be set to visible=true;. That way, you reduce the number of pixels drawn to what is absolutely necessary, and keep screen memory usage to a minimum.
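The per-frame check is just a rectangle intersection against the visible view. A sketch of the idea (in Java rather than ActionScript, with a hypothetical Tile type - the test itself is the same in either language):

// Tile with stage-space bounds and a visibility flag.
static class Tile {
    float x, y, width, height;
    boolean visible;
}

// Show only tiles that intersect the camera rectangle; hide the rest.
static void cullTiles(Tile[] tiles, float camX, float camY, float viewW, float viewH) {
    for (Tile t : tiles) {
        t.visible = t.x < camX + viewW && t.x + t.width > camX
                 && t.y < camY + viewH && t.y + t.height > camY;
    }
}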

Stitch grid of 375x375 images together into one

I have a map, divided into 375x375 tiles of 16 pixels. I want to develop a java application to stitch those images together into one big image. How would I go about doing this in java? Any useful libraries?
Create a BufferedImage that is 375*16 = 6000 pixels on each side, i.e. 6000x6000 px. For a 36 MPix image (roughly 140 MB at 4 bytes per pixel), you will need a lot of memory.
Get a Graphics instance from the image.
Loop through the tiles and call g.drawImage(tile, x, y, null).
Dispose of the graphics instance.
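Putting those steps together, a sketch (the tile file naming scheme is an assumption):

import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class TileStitcher {
    public static void main(String[] args) throws Exception {
        int tiles = 375, tileSize = 16;
        BufferedImage out = new BufferedImage(
                tiles * tileSize, tiles * tileSize, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = out.createGraphics();
        for (int y = 0; y < tiles; y++) {
            for (int x = 0; x < tiles; x++) {
                // Assumed naming: tiles/tile_<column>_<row>.png
                BufferedImage tile = ImageIO.read(
                        new File("tiles/tile_" + x + "_" + y + ".png"));
                g.drawImage(tile, x * tileSize, y * tileSize, null);
            }
        }
        g.dispose();
        ImageIO.write(out, "png", new File("map.png"));
    }
}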
Of course, it might make more sense (and would take a lot less memory) to draw the tiles that are within view, directly to the rendering surface of the game (if that is the end purpose).
Any useful libraries?
A library would be overkill for this. Using either technique outlined above, it only takes a couple of lines of code.

JOGL: How can I draw many strings quickly

I'm using JOGL (OpenGL for Java) for my application and I need to draw tons of strings on screen at once, and my current solution is far too slow. Right now I'm drawing the strings with TextRenderer using the draw3D method, and for even a moderate number of strings (around 300-500) it kills the FPS. I started messing with drawing text onto the object textures, which is much faster, but there are a few problems with it. The first is that allocating all those textures requires a lot of memory. The second is that I need to find a way to size the texture so it's only as big as the string and then map it to the object without stretching. The problem there is that all these thousands of boxes are using a single model being rendered with a call list. I'm not sure it's possible to change the texture mapping for each object in that situation.
I don't mind if the text appears flat or 3D, it just has to be positioned in 3D space. I would prefer to render the text in the highest quality possible without sacrificing too much speed, since readability of the text is the most important part of the application. Also, nearly all of the strings are different, there aren't many duplicates.
So, my question: Am I going down the right path with drawing the strings on the textures, and if so, how can I overcome those 2 problems? Or is there another method that would suit my needs?
Depending on exactly how TextRenderer works, you might be able to use display lists to batch up your text drawing commands.
If TextRenderer works by having a texture of individual character glyphs and piecing together a string one glyph at a time, it'll be fine: just bookend your text drawing code with glNewList and glEndList. Once a list is defined, just use glCallList to use it.
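The bookending itself would look roughly like this in JOGL (gl, textRenderer, and a hypothetical Label type are assumed; see the caveat below before relying on it):

// Compile the text drawing into a display list once...
int listId = gl.glGenLists(1);
gl.glNewList(listId, GL2.GL_COMPILE);
textRenderer.begin3DRendering();
for (Label label : labels) { // Label: a string plus a 3D position and scale
    textRenderer.draw3D(label.text, label.x, label.y, label.z, label.scale);
}
textRenderer.end3DRendering();
gl.glEndList();

// ...then replay it cheaply every frame.
gl.glCallList(listId);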
If however, TextRenderer works by drawing complete strings into a texture and using one quad per string - display lists may not work. If the strings in one batch do not all fit within TextRenderer's cache, it will delete the least-recently used one to reclaim some space. Display lists will only recreate the OpenGL calls made, and so the work done by TextRenderer to update the string cache texture will be lost and you'll get incorrect output. From a quick scan of the source, I suspect that TextRenderer works in this manner.
To summarise: display lists will greatly speed up your rendering, but only if you don't overflow TextRenderer's string cache texture and don't use the TextRenderer after the display list has been defined.
If you can't meet these constraints you're going to have to go a bit hardcore and write your own text renderer that renders glyph-by-glyph - it'll then be trivial to cache the output geometry and extremely quick to re-render. There's an example of such a system here, with the tool to create a font here. It uses LWJGL rather than JOGL, but the translation between the two will be the least of your worries if you want to integrate it - it's meshed with the texture management etc.
