I want to have a global gradient that goes all the way across the window, but that only shows up on certain objects in OpenGL. It's a bit like using a 'Clipping Mask' in Photoshop; here is an example of what I am trying to achieve.
(By the way, I am doing this with LWJGL in Java, but that shouldn't matter too much.)
Align the texture coordinates with the screen coordinates of your objects. That will cause the texture to be mapped the way you want.
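For example, with the legacy fixed-function pipeline this could look roughly like the sketch below (winW/winH and the quad coordinates are hypothetical names, not from your code):

import static org.lwjgl.opengl.GL11.*;

// Draw a textured quad whose texture coordinates are derived from its
// screen position, so the bound gradient texture appears "pinned" to the window.
void drawScreenMaskedQuad(float x, float y, float w, float h, float winW, float winH) {
    glBegin(GL_QUADS);
    glTexCoord2f(x / winW,       y / winH);       glVertex2f(x,     y);
    glTexCoord2f((x + w) / winW, y / winH);       glVertex2f(x + w, y);
    glTexCoord2f((x + w) / winW, (y + h) / winH); glVertex2f(x + w, y + h);
    glTexCoord2f(x / winW,       (y + h) / winH); glVertex2f(x,     y + h);
    glEnd();
}

With shaders, the same effect comes from dividing gl_FragCoord.xy by the viewport size in the fragment shader and sampling the gradient with that.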
I am very confused about all the different coordinate systems.
I am using LibGDX with Tiled.
These all have their own coordinate system (sort of).
LibGDX screen
Tiled map
UIcamera
Orthogonal TiledMapCamera
UIStage
TiledMapStage
There are too many concepts, and I can no longer keep a clear mental model of how they affect each other in complex scenarios, like:
having different screen dimensions than the tiled map dimensions
when resizing the screen.
Can someone shed some light on this?
Many thanks!
In a 2D game, you really only have to think about the coordinate system of the orthographic camera. Whatever is drawn with a certain camera's combined matrix is fit to the rectangle of the screen (and if you set up the camera correctly, it will not be distorted).
LibGDX provides the Viewport classes to help set up your camera. You can think of them as camera managers that size the camera to the arrangement you want. You instantiate them with the size of the "window" onto the game world that you want to see. The only place you have to consider the actual screen dimensions is in the resize method, where you pass the dimensions to the Viewport class and let it handle sizing your camera for you so the scene won't be distorted.
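A minimal sketch of that setup in LibGDX (the 20x12 world size is just an arbitrary example, not something from your project):

import com.badlogic.gdx.ApplicationAdapter;
import com.badlogic.gdx.graphics.OrthographicCamera;
import com.badlogic.gdx.utils.viewport.FitViewport;
import com.badlogic.gdx.utils.viewport.Viewport;

public class MyGame extends ApplicationAdapter {
    private OrthographicCamera camera;
    private Viewport viewport;

    @Override
    public void create() {
        camera = new OrthographicCamera();
        // Ask to see a 20x12-unit window of the game world; the viewport
        // fits that rectangle to whatever screen it is given.
        viewport = new FitViewport(20, 12, camera);
    }

    @Override
    public void resize(int width, int height) {
        // The only place the actual screen dimensions matter.
        viewport.update(width, height);
    }
}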
You might have more than one camera. Typically your UI will have its own, and the gameplay world will have another (because you want it to move around in the world).
When it comes to input, the raw X and Y are given in screen pixel coordinates, but you just pass these coordinates to the camera.unproject method to have them converted to the same coordinates as your game world.
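For example, in your InputProcessor (continuing the hypothetical camera from the sketch above):

import com.badlogic.gdx.math.Vector3;

private final Vector3 tmp = new Vector3();

@Override
public boolean touchDown(int screenX, int screenY, int pointer, int button) {
    tmp.set(screenX, screenY, 0);
    camera.unproject(tmp);   // tmp.x / tmp.y are now in world coordinates
    // ... react to the touch at (tmp.x, tmp.y) in the game world ...
    return true;
}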
I don't use tiles, so I can't get specific there, but the same principles should apply.
I am creating a 2D game with lwjgl and slick-util. For a special feature in my game I wanted to be able to give textures a certain opacity. I have managed to figure this out, but the next step is giving a Texture as a parameter, which would let me give certain textures certain opacities in certain spots.
Note: I have gotten it sort of working before, but the mask also seemed to remove my background image, which I do not want.
I cannot post images because I don't have enough reputation, but what I basically want to do is:
first render a background image.
then render another image on top with a mask on it; I do not want this mask to apply to the background.
How would I go about doing this?
I think you are using the wrong blending mode. If you did not change the default blending mode, then you need:
glEnable(GL_BLEND); // Enable blending.
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); // Setup blending function.
I think that in your case the texture does not blend with the background; it simply replaces it.
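A rough sketch of the draw order, assuming legacy LWJGL immediate-mode rendering (drawQuad, screenW/screenH and the texture fields are placeholders for your own code):

// 1. Background: blending disabled, so it is drawn fully opaque.
glDisable(GL_BLEND);
backgroundTexture.bind();                 // slick-util Texture.bind()
drawQuad(0, 0, screenW, screenH);

// 2. Masked image on top: enable blending so its alpha mixes with the background.
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
maskedTexture.bind();
drawQuad(x, y, w, h);

Because blending is only enabled for the second draw, the mask never affects the background image underneath.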
What is the easiest way to give a 2D texture in OpenGL (lwjgl) some kind of "thickness"? Of course I could somehow get the border of the texture and add quads, oriented by the normal of the quad that the texture is drawn on, in the color of the adjacent texture pixel. But there has to be an easier way to do it.
Minecraft uses lwjgl as well, and it has the (new) 3D items that spin on the ground and don't cause as much of a performance issue as they would if they were built from dozens of polygons. Also, when you hold an item in your hand, there is that kind of "stretched" texture in depth, which also works with high-resolution textures.
Does anyone know how that is done?
A 2D texture is always infinitely thin. If you want actual thickness (when you look edge onto it) you need geometry. In Minecraft things look blocky, because they've been modeled that way.
If you look at it from an angle and ignore the edges, you can use parallax mapping to "fake" some depth in the texture. Or you can use a depth map and a combination of tessellation and vertex shaders to implement displacement mapping, which generates geometry from the texture.
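If you do go the geometry route, the core idea is small: walk the sprite's alpha mask and emit a thin side quad wherever an opaque texel borders a transparent one. A rough, self-contained sketch of the edge-finding part (opaque[][] and the edge list are illustrative names, not any existing API):

import java.util.ArrayList;
import java.util.List;

class SpriteExtruder {
    // Each entry is {x, y, dx, dy} in texel units: the texel at (x, y) needs a
    // side quad facing direction (dx, dy), because its neighbour there is transparent.
    static List<int[]> findEdges(boolean[][] opaque) {
        List<int[]> edges = new ArrayList<>();
        int w = opaque.length, h = opaque[0].length;
        for (int x = 0; x < w; x++) {
            for (int y = 0; y < h; y++) {
                if (!opaque[x][y]) continue;
                if (x == 0     || !opaque[x - 1][y]) edges.add(new int[]{x, y, -1,  0});
                if (x == w - 1 || !opaque[x + 1][y]) edges.add(new int[]{x, y,  1,  0});
                if (y == 0     || !opaque[x][y - 1]) edges.add(new int[]{x, y,  0, -1});
                if (y == h - 1 || !opaque[x][y + 1]) edges.add(new int[]{x, y,  0,  1});
            }
        }
        return edges;
    }
}

Each returned edge becomes one thin quad between the front and back copies of the sprite, textured with the color of the texel it belongs to.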
I need to capture more pixels than the width of the screen contains in order to save a higher-res image. I figure the only two options are to pack more pixels into the screen with some matrix command, or to make the actual view larger than the screen (which I don't think is possible). I should probably mention that I'm using OpenGL ES 2. Any help?
The technique you're looking for is called Render to Texture. Essentially you create an offscreen framebuffer, and redirect your draw calls to this framebuffer instead of the default.
You can make your framebuffer as big as you want (within hardware limitations).
This looks like a reasonable example:
http://blog.shayanjaved.com/2011/05/13/android-opengl-es-2-0-render-to-texture/
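A minimal sketch of the setup using the Android GLES20 bindings (fboWidth/fboHeight are whatever high-resolution size you want, within the hardware's GL_MAX_TEXTURE_SIZE):

import android.opengl.GLES20;

int[] fbo = new int[1], tex = new int[1];
GLES20.glGenFramebuffers(1, fbo, 0);
GLES20.glGenTextures(1, tex, 0);

// Create an empty texture that will receive the rendering.
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, fboWidth, fboHeight,
        0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);

// Attach the texture to an offscreen framebuffer and render into it.
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo[0]);
GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
        GLES20.GL_TEXTURE_2D, tex[0], 0);
GLES20.glViewport(0, 0, fboWidth, fboHeight);
// ... issue your normal draw calls here; they now render into the texture ...
// then read the pixels back with glReadPixels before unbinding.
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);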
I am generating very large hex grids (up to 120k total hexes; at 32px-wide hexes that results in images over 12k pixels wide), and I'm trying to find an efficient way to bind these to OpenGL textures in libgdx. I was thinking of using multiple FBOs and breaking the grid up into tiles as necessary, but I'm not sure how to ensure continuity between the FBOs. I can't start with one massive FBO, because that is backed by a texture, so it would fail when trying to load it into video memory. I can't use a standard bitmap on the heap because I need the drawing functionality of an OpenGL surface.
So what I was thinking was I would need to overdraw on the FBOs and somehow pick up on the next FBO exactly where the previous left off. However I'm not sure how to go about this. I'm drawing the hex grid with a series of hexagonal meshes, FYI.
Of course, there's probably some other much simpler and more efficient way to do this that I'm not even thinking of, which is why I pose this question to you fine people!
You have to draw it in pieces. You need to be able to draw your hex grid from an arbitrary position. This means being able to compute which hexes to draw based on a rectangle overlaid over the map. This isn't a hard problem, and I wouldn't worry too much about drawing extra stuff off-screen. You should master this ability to view the hexmap from any position before moving on.
Once you've mastered that, it's really simple.
Draw the top-left corner and store the pixel data. Then move the area you're drawing over exactly one image width. Draw and store that. Move the area over one image width. Draw and store it. Keep doing that until you've covered the entire width.
Move down one image height and repeat the process. Once you've run out of width and height, you're done. Save your mega-huge image.
You don't need FBOs for this. You could draw it to the screen if you wanted. Though if you want maximum performance, I would suggest using FBOs, double buffering them, and using glReadPixels through a pixel buffer object. That should cut down a lot on latency.
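Since the question mentions libgdx, here is a rough sketch of that tile-and-stitch loop using a FrameBuffer there. drawHexGrid, mapWidthPx and mapHeightPx are placeholders for your own rendering code and map size, and depending on your setup you may need to flip each tile vertically before stitching:

import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.GL20;
import com.badlogic.gdx.graphics.OrthographicCamera;
import com.badlogic.gdx.graphics.Pixmap;
import com.badlogic.gdx.graphics.PixmapIO;
import com.badlogic.gdx.graphics.glutils.FrameBuffer;
import com.badlogic.gdx.utils.ScreenUtils;

int TILE = 1024;   // example tile size per render pass, in pixels
FrameBuffer fbo = new FrameBuffer(Pixmap.Format.RGBA8888, TILE, TILE, false);
OrthographicCamera cam = new OrthographicCamera(TILE, TILE);
Pixmap result = new Pixmap(mapWidthPx, mapHeightPx, Pixmap.Format.RGBA8888);

for (int ty = 0; ty < mapHeightPx; ty += TILE) {
    for (int tx = 0; tx < mapWidthPx; tx += TILE) {
        // Aim the camera at this tile of the map and render it into the FBO.
        cam.position.set(tx + TILE / 2f, ty + TILE / 2f, 0);
        cam.update();
        fbo.begin();
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
        drawHexGrid(cam);                         // your existing hex-mesh rendering
        Pixmap tile = ScreenUtils.getFrameBufferPixmap(0, 0, TILE, TILE);
        fbo.end();
        result.drawPixmap(tile, tx, ty);          // stitch the tile into the big image
        tile.dispose();
    }
}
PixmapIO.writePNG(Gdx.files.local("hexmap.png"), result);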