Alright, I couldn't find a good title for this, so I will explain in a bit more detail.
I am making a game using LWJGL and I have gotten some basic rendering done, but now I want to do something a bit more advanced.
Here is the situation:
I have a mesh (positions, normals, texture coords, indices) that I generate, and it currently supports one texture. This would be great if I had a single image containing all of the textures, but sadly that isn't the case: I have an individual image for each texture, and each one needs to be loaded separately.
Now, I do see one way I could do this, but it doesn't seem practical or like a good use of memory:
- Load all the textures into one image and record where each one sits in that image, for use with the texture coords.
The textures should NOT blend together; hard-coding anything is not an option, as I want modding to be easy to implement, and anywhere from 1 texture (best case) to 65,536+ textures (worst case) may be used in the same "mesh".
I am simply going to use a texture atlas, as doing anything else seems impractical. Thanks @httpdigest for the suggestion.
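For anyone with the same problem, remapping each tile's local texture coordinates into its atlas cell can be as simple as the sketch below (atlasColumns, atlasRows and tileIndex are just placeholder names, not from any particular library):

// Sketch: map a tile's local UVs (0..1) into its cell of a grid-layout texture atlas.
static float[] atlasUV(int tileIndex, float localU, float localV,
                       int atlasColumns, int atlasRows) {
    float cellWidth  = 1.0f / atlasColumns;
    float cellHeight = 1.0f / atlasRows;
    int column = tileIndex % atlasColumns;
    int row    = tileIndex / atlasColumns;
    // Offset the local coordinate by the cell position, then scale it down to cell size.
    return new float[] {
        (column + localU) * cellWidth,
        (row    + localV) * cellHeight
    };
}

In practice you probably also want a pixel or two of padding around each cell so neighbouring textures don't bleed into each other when filtering or mipmapping.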
Similar to the game Factorio, I'm trying to create "3D"-looking terrain, but of course in 2D. Factorio seems to do this very well, creating terrain that looks like this:
You can see the edges of the terrain, and it's very clearly curved. In my own 2D game I've been trying to think of how to do the same thing, but all the ways I can think of seem slow or CPU-intensive. Currently my terrain looks like this:
It's simply 2D quads, textured and drawn on screen; each quad is 16x16 (except the water, which is technically a background, but that's not important now). How could I even begin to change my terrain to look more like Factorio or other "2.5D" games? Do they simply use different textures and check where a tile sits relative to other tiles, or do they take a different approach?
Thanks in advance for your help!
I am a Factorio dev but I have not done this, so I can only tell you what I know generally.
There is a basic way to do it and then there are optional improvements.
Either way you will need two things:
A set of textures for every situation you want to handle
A set of rules mapping "local topology -> texture"
So you have your 2D tile map, and you move a window across it; whenever it matches a pattern, you apply the appropriate texture.
You probably wouldn't want to do that on the fly every tick, but rather calculate it all when you generate the map (or map segment; Factorio generates new areas when needed).
I will be using your picture and my imba ms paint skills to demonstrate.
This is an example of such a rule. Green is land, blue is water, grey is "I don't care".
In reality you will need a lot of these rules to cover all cases (100+, I believe).
In your case, this rule would apply at the two highlighted spots.
This is all you need to have a working generator.
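If it helps, a rough Java sketch of that matching step could look like the following (purely illustrative; the names are made up and this is not actual Factorio code):

import java.util.List;

enum Tile { LAND, WATER, WILDCARD }

class Rule {
    Tile[][] pattern;   // 3x3 template; WILDCARD is the grey "I don't care" tile
    int textureId;      // texture to apply to the centre tile when the pattern matches
}

class TerrainTexturer {
    // True if the 3x3 neighbourhood around (x, y) matches the rule's pattern.
    static boolean matches(Tile[][] map, int x, int y, Rule rule) {
        for (int dy = -1; dy <= 1; dy++) {
            for (int dx = -1; dx <= 1; dx++) {
                Tile wanted = rule.pattern[dy + 1][dx + 1];
                if (wanted != Tile.WILDCARD && wanted != map[y + dy][x + dx]) {
                    return false;
                }
            }
        }
        return true;
    }

    // Run once when the map (segment) is generated, not every tick.
    static void assignTextures(Tile[][] map, List<Rule> rules, int[][] textureMap) {
        for (int y = 1; y < map.length - 1; y++) {
            for (int x = 1; x < map[y].length - 1; x++) {
                for (Rule rule : rules) {
                    if (matches(map, x, y, rule)) {
                        textureMap[y][x] = rule.textureId;
                        break; // first matching rule wins
                    }
                }
            }
        }
    }
}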
There is one decision that you need to make here. As you can see, the shoreline runs inside the tile, not between tiles. So you need to choose whether it will run through the last land tile or the last water tile. The picture could therefore be the result of either of these two maps (my template example would be from the left one):
Both choices are ok. In fact, Factorio switched from the "shoreline on land" on the left to the "shoreline on water" on the right quite recently. Just keep in mind that you will need to adjust the walking/pathfinding to account for this.
Now note that the two areas matched by the one pattern in the example look different. This can be a result of two possible improvements that make the result nicer.
The first is that for one case you can have several different textures and pick a random one. You will need to keep that choice in the game save so that it looks the same after loading.
Another one is more advanced. While the basic algorithm can already give you pretty good results, there are things it can't do.
You can use larger templates and larger textures that span over several tiles. That way you can draw larger compact pieces of the terrain without being limited by the fact that all the tiles need to be connectable to all (valid) others.
The examples you provided are still 2D textures (technically). But since the textures themselves are drawn with a "fancy 3D" look, they appear to be angled 3D/2D.
So your best bet would be to upgrade your textures (and add shadows to entities for extra depth).
Edit:
The edges you asked about are probably laid out by checking whether a "tile" is an edge and, if so, adding an edge texture on top of the background, while the actual tile itself is also a flat image (just like the water). Add some shadow afterwards and the 3D illusion is complete.
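In code, that check could be as simple as something like this (all the helper names are made up, just to illustrate the layering):

// Sketch: draw the flat ground tile, then layer an edge texture on top
// whenever a neighbouring tile is water; a shadow pass can follow the same pattern.
drawTile(groundTexture, x, y);
boolean touchesWater = isWater(x + 1, y) || isWater(x - 1, y)
                    || isWater(x, y + 1) || isWater(x, y - 1);
if (touchesWater) {
    drawTile(edgeOverlayTexture, x, y); // flat edge image drawn over the background
}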
I hope this answers your question; otherwise feel free to ask for clarification.
Note: I am new to LibGDX on Android.
I am creating a clone of the famous game Color Switch for practice purposes.
I have used ShapeRenderer for creating the ball and moving circles.
After reading more about ShapeRenderer, I realized it is mostly used for debugging purposes, so I searched for alternatives and learned about Pixmaps, but I am stuck on how to use them and cannot find any good sources.
So I want to know: are there any other alternatives to ShapeRenderer, or any sources on Pixmaps to get started with?
ShapeRenderer is perfectly fine for your use-case. So, unless you have an actual problem with it, there is no need to switch to something else.
The reason ShapeRenderer is primarily used for debugging is that it's limited to basic shapes and colors. In many use cases this is insufficient, and images (e.g. .png files) are used instead. In those cases a SpriteBatch, which is optimized for (rectangular) images rather than shapes, is typically used to draw the images.
A Pixmap is the raw image data in CPU memory (RAM); it's practically an interim step between the image on disk (e.g. the .png file) and the image (Texture) in GPU memory (VRAM). LibGDX handles this interim step for you, so in most use cases you should never have to deal with a Pixmap yourself. Manually manipulating a Pixmap is a very costly operation and worse than using ShapeRenderer for your use case.
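For completeness, drawing the ball with ShapeRenderer stays as simple as the sketch below (BallScreen and the hard-coded values are placeholders for your own game state):

import com.badlogic.gdx.ApplicationAdapter;
import com.badlogic.gdx.graphics.Color;
import com.badlogic.gdx.graphics.glutils.ShapeRenderer;

public class BallScreen extends ApplicationAdapter {
    private ShapeRenderer shapes;

    @Override
    public void create() {
        shapes = new ShapeRenderer();
    }

    @Override
    public void render() {
        // The ball's position and size would come from your game logic.
        float ballX = 240f, ballY = 400f, ballRadius = 20f;

        shapes.begin(ShapeRenderer.ShapeType.Filled);
        shapes.setColor(Color.CYAN);
        shapes.circle(ballX, ballY, ballRadius); // the player ball
        shapes.end();
    }

    @Override
    public void dispose() {
        shapes.dispose();
    }
}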
I was searching for an anti-aliasing algorithm for my OpenGL program (so I searched for a good shader). The thing is, all the shaders I found want to do something with textures, but I don't use textures, only colors. I mostly looked at FXAA, so: is there an anti-aliasing algorithm that works with just colors? The game this is for looks blocky, like Minecraft, but it only uses colors and cubes of different sizes.
I hope someone can help me.
Greetings
Anti-aliasing has nothing specifically to do with either textures or colors.
Proper anti-aliasing is about sample rate, which, while highly technical, can be thought of as doing extra work to make a better educated guess at some value that cannot be directly looked up (e.g. a pixel that is only partially covered by a triangle).
Multisample Anti-Aliasing (MSAA) will work nicely for you. It only anti-aliases polygon edges and does nothing for texture aliasing in the interior of a polygon, but since you are not using textures, you do not need to worry about aliasing inside polygons.
Incidentally, FXAA is not proper anti-aliasing. FXAA is basically a shader-based edge detection and blur image processing filter. FXAA will blur any part of the scene with sharp edges, whether it is a polygon edge or an edge due to a mapped texture. It indiscriminately blurs anything it thinks is an aliased edge and gets this wrong often, resulting in blurry textures.
To use MSAA, you need:
1. A framebuffer with at least 2 samples
2. Multisample rasterization enabled
Satisfying (1) is going to depend on what you used to create your window (in this case LWJGL). Most frameworks let you select the sample count as one of the parameters at the time of creation.
Framebuffer Objects can also be used to do this without messing with your window's parameters, but they are more complicated than need be for this discussion.
(2) is as simple as calling glEnable(GL_MULTISAMPLE).
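For illustration, assuming LWJGL 3 (where the window is created through GLFW), the two steps look roughly like this; with LWJGL 2 you would instead pass something like new PixelFormat().withSamples(4) to Display.create():

import static org.lwjgl.glfw.GLFW.*;
import static org.lwjgl.opengl.GL11.glEnable;
import static org.lwjgl.opengl.GL13.GL_MULTISAMPLE;

// (1) Request a default framebuffer with 4 samples, before creating the window.
glfwWindowHint(GLFW_SAMPLES, 4);
long window = glfwCreateWindow(800, 600, "MSAA demo", 0, 0);

// ... make the context current and create the GL capabilities as usual ...

// (2) Enable multisample rasterization (often on by default, but be explicit).
glEnable(GL_MULTISAMPLE);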
I'm developing an Android game in Java, and I am currently trying to figure out an efficient way of rendering the necessary textures.
Suppose you have a grid similar to a checkers board layout, and tiles to fill that grid, one per square on the board. That is the concept of what will be displayed. Currently I am drawing each tile one by one. All of the texture loading is done once upon creation, not during drawing.
Now, for what I want to do. I've noticed that drawing everything one by one, although fast for what I'm doing, can be glitchy. In my game the user has the ability to drag the "board" to view different areas. Right now I only allow the necessary tiles to be drawn, depending on the location of the top-left visible tile. As I said, it works quite fast, but once the user starts interacting more or dragging faster, the rendering starts to struggle and isn't as fast as it should be. This causes small gaps between the tiles; they're not large, just large enough to be noticeable.
What I want to do is place each tile's texture at its location in the grid, building a new texture that contains the whole viewable area, and then render that entire area instead of rendering each tile separately. I've done a lot of research and looked at many questions, but I still haven't found something that helps. I've read that rendering to a texture using a framebuffer may help, but I haven't found any easy-to-follow tutorials or examples, just a lot of "here's the code, no explanation" or "here's something similar to what you want, but using different things." So, if someone could point me towards a good tutorial/example, or post a useful answer, I would be very grateful. I'm avoiding OpenGL ES 2.0 because I want my game to be compatible with many devices, and for what I'm doing, 2.0 is not necessary.
For further explanation, here is a quick summary of what my code does:
for (go through visible rows) {
    for (go through visible columns) {
        drawTile(); // Does the texture binding and drawing for each tile
    }
}
What I want:
for (go through visible rows) {
    for (go through visible columns) {
        loadTileTextureIntoGridTexture();
        // I want it to combine the textures into one texture
    }
}
drawGridTexture();
Doing it the second way leaves only one whole texture to render, as opposed to visibleRows*visibleColumns separate textures.
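Roughly, I imagine the render-to-texture version looking something like this untested sketch (it assumes the GL_OES_framebuffer_object extension via Android's GL11ExtensionPack, and gridTexture plus the helper methods are placeholders for my own code):

// Untested sketch: render the visible tiles once into gridTexture through an
// OES framebuffer, then draw that single texture every frame afterwards.
// gl is the GL10 instance; the cast works on devices that support the extension.
GL11ExtensionPack gl11ep = (GL11ExtensionPack) gl;

int[] fbo = new int[1];
gl11ep.glGenFramebuffersOES(1, fbo, 0);
gl11ep.glBindFramebufferOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES, fbo[0]);
gl11ep.glFramebufferTexture2DOES(
        GL11ExtensionPack.GL_FRAMEBUFFER_OES,
        GL11ExtensionPack.GL_COLOR_ATTACHMENT0_OES,
        GL10.GL_TEXTURE_2D, gridTexture, 0); // gridTexture: empty texture sized to the visible area

for (int row = firstVisibleRow; row <= lastVisibleRow; row++) {
    for (int col = firstVisibleCol; col <= lastVisibleCol; col++) {
        drawTile(row, col); // same per-tile draw as before, now targeting the texture
    }
}

gl11ep.glBindFramebufferOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES, 0);
drawGridTexture(gridTexture); // one textured quad per frame from here on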
I'm writing a game in Java with LWJGL (OpenGL). I'm using a library that handles a lot of the messy details for me, but I need to find a much faster way to do this.
Basically I want to set every pixel on the screen to, say, a random color as fast as possible. The "random colors" are just a 2D array that gets updated every 2-3 seconds. I've tried drawing rects and using images; both are pretty slow for what I want to do.
I think I want to learn how to write a GPU shader? Is that the fastest way to do this? LWJGL exposes the OpenGL API to Java. Any basic tutorials on how to get started with OpenGL shaders? Or should I dynamically create a texture of some sort and just throw up the entire texture; would that be faster?
If you were statically displaying the same image, then using a texture or display list would suffice. But as you want to update it frequently, shaders really are the best option. Shader code executes on the GPU and modifies data in graphics RAM, so you have no bottleneck transferring from CPU to GPU. The next best thing would probably be a Pixel Buffer Object or Framebuffer Object: buffer objects let you read/write graphics RAM via DMA (without having to go through the CPU), so they can be pretty fast.
I haven't written any shaders yet, so I can't recommend any good resources. But SongHo's OpenGL pages are a good place to learn about Buffer Objects. (His examples are in C++ though)
Textures are the fastest way to draw something on screen: draw a texture-mapped quad covering the screen, and it should be fast enough. When you need to re-upload the texture data, use glTexSubImage2D to update it.
No need to use shaders.
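For example, something along these lines, assuming an already-created RGBA texture and a colors[][] array of packed RGBA ints (the names are placeholders):

import java.nio.ByteBuffer;
import org.lwjgl.BufferUtils;
import static org.lwjgl.opengl.GL11.*;

// Allocate once: width * height pixels, 4 bytes (RGBA) each.
ByteBuffer pixels = BufferUtils.createByteBuffer(width * height * 4);

// Every 2-3 seconds: refill the buffer from the color array...
pixels.clear();
for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
        int c = colors[y][x];        // assumed packed as 0xRRGGBBAA
        pixels.put((byte) (c >> 24)) // red
              .put((byte) (c >> 16)) // green
              .put((byte) (c >> 8))  // blue
              .put((byte) c);        // alpha
    }
}
pixels.flip();

// ...then push it into the existing texture and draw the full-screen quad as usual.
glBindTexture(GL_TEXTURE_2D, textureId);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);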
I've yet to do any work with shaders in OpenGL, but having faced the same scenario on multiple occasions, I've handled it with a texture stretched across the screen on top, and it worked quite effectively.
I don't know exactly how you are drawing your pixels, but the limit you've hit could be due to the amount of data you transfer (inefficiently?). Updating a screen full of pixels every 2-3 seconds shouldn't be hard at all. Although shaders bring you closer to the graphics card, they will never make inefficient methods fast, so...
Why is your code so slow?
What code? What code exactly did you try? What texture did you use, render to, ...?
Is it slow? How slow? How fast do you expect it to be?
How quickly can one get 1920x1080(?) pixels into video RAM; what's your hardware, drivers, OS?
I think you need to edit/repost before we can help you solve your problem. Just because it is slow is no guarantee at all that shaders will be even one bit faster.