I have a simple 16 x 16 grid to which I apply a single texture. The texture file is divided into four parts - each a different color. By default each square is colored green (upper left part of the file). If you click a square I apply the red portion (upper right part of the file). Now I want to make the square disappear entirely when clicked. I suppose I can use a transparent texture but I was hoping I wouldn't have to so as to avoid loading / reloading two different texture files.
Here is the code I use to update the texture vbo:
//I don't bother offsetting my changes. I simply update the 'UVs'
//array and then copy the entire thing to the floatbuffer.
public void updateTexture(int offset)
{
//populate the texture buffer
//fbtex is a floatbuffer (16x16x8 in size). UVs is an array, same size.
fbtex.put( UVs );
fbtex.rewind();
glBindBuffer(GL_ARRAY_BUFFER, vboHandles.get(TEXTURE_IDX)); //the texture data
glBufferSubData(GL_ARRAY_BUFFER, offset, fbtex);
fbtex.clear(); //clear() just resets position/limit so the buffer is ready for the next update
}
The VBO will contain up to 256 instances of my co-ords for Green:
public float[] UV_0 = { 0.02f,0.02f,
0.02f,0.24f,
0.24f,0.24f,
0.24f,0.02f};
or fewer if it includes a few instances of my co-ords for Red:
public float[] UV_1 = { 0.24f,0.02f,
0.48f,0.02f,
0.48f,0.24f,
0.24f,0.24f};
Is there anything I can do to the VBO data to make a section invisible, so that the objects in the background can be seen, for example?
You can just not render parts of the VBO. Normally you draw the entire data with something like
glDrawArrays(GL_TRIANGLES, 0, numElements);
glDrawArrays takes first and count parameters which you can use to render only a part of the VBO. So if you want to skip some data, you render everything before it and everything after it in two separate draw calls.
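For illustration, a minimal sketch of that split, assuming six vertices per square when drawing GL_TRIANGLES (with GL_QUADS it would be four) and a hypothetical clickedIndex for the square to hide:
//assumes static imports from org.lwjgl.opengl.GL11, as in the question's code
int vertsPerSquare = 6;                    //two triangles per grid square
int first = clickedIndex * vertsPerSquare; //first vertex of the hidden square

//everything before the hidden square...
glDrawArrays(GL_TRIANGLES, 0, first);
//...and everything after it
glDrawArrays(GL_TRIANGLES, first + vertsPerSquare,
             numElements - (first + vertsPerSquare));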
If you create an RGBA format texture rather than RGB, just make one part within the texture transparent. (Alpha/opacity zero.) Then you just need to update the texture coord VBO with coords for the transparent square just like any other.
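For example (a sketch only - the coordinates below assume the lower part of your texture file has been made fully transparent, so adjust them to wherever the transparent region actually sits; blending must also be enabled or the alpha is ignored):
public float[] UV_CLEAR = { 0.02f,0.26f,
                            0.02f,0.48f,
                            0.24f,0.48f,
                            0.24f,0.26f};

//once, during setup:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);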
Or if the colors are flat with no gradients or patterning, just a single RGB value for each grid square, why use a texture at all? Change your "UV" buffer into an "RGBA" buffer and just set the color at each vertex to red/green/transparent.
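A rough idea of what that buffer could hold instead, assuming four RGBA floats per vertex (the names are illustrative, not from the question):
public float[] RGBA_GREEN = { 0f,1f,0f,1f,  0f,1f,0f,1f,  0f,1f,0f,1f,  0f,1f,0f,1f };
public float[] RGBA_RED   = { 1f,0f,0f,1f,  1f,0f,0f,1f,  1f,0f,0f,1f,  1f,0f,0f,1f };
public float[] RGBA_CLEAR = { 0f,0f,0f,0f,  0f,0f,0f,0f,  0f,0f,0f,0f,  0f,0f,0f,0f };
//the vertex attribute setup changes from 2 floats (UV) to 4 floats (RGBA) per vertex,
//and blending must be enabled for the transparent squares to show the background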
I'm using Processing with Java. I have a transparent-background PNG drawing I made; it looks a bit like an abstract leaf, somewhat like a Matisse. I know how to create shapes with random colors chosen from an array, so I can display that shape with a different background color for each frame in a loop, saving each one. What I want to do next is create another layer over the drawing, filled with a random color from my array, but have that layer display only on the pixels of the underlying PNG.
The end result would be the ability to put out an endless number of randomly colored versions of that leaf design, each with a random background color. I just haven't figured out how to create this kind of clipping-mask effect.
You can use the leaf image as a mask for the dynamically colored object. This will apply the alpha channel (transparency) to the masked image. mask operates on images, so you’ll need to draw your color fill to a PGraphics or PImage and apply the mask to that image.
Depending on the specifics of the effect you’re trying to achieve, you might also be able to simply apply a color tint to the leaf image to change to the desired color.
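A minimal Processing sketch of the tint idea, assuming a light/white leaf drawing in a file called leaf.png and a small placeholder palette (tint multiplies the image by the color, so a white leaf takes on the tint color directly):
PImage leaf;
color[] palette = { #FF5500, #2266AA, #88CC33 };

void setup() {
  size(600, 600);
  leaf = loadImage("leaf.png");                       // transparent-background PNG
}

void draw() {
  background(palette[(int) random(palette.length)]);  // random background color
  tint(palette[(int) random(palette.length)]);        // recolor the leaf
  image(leaf, 0, 0);
  noTint();
  saveFrame("leaf-####.png");                         // save each variation
}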
I want to develop a simple 2D side scrolling game using libGDX.
My world contains many different 64x64 pixel blocks that are drawn by a SpriteBatch using a camera to fit the screen. My 640x640px resource file contains all these images. The block textures are positioned at (0, 0), (0, 64), (64, 0), ... and so on in my resource file.
When my app launches, I load the texture and create many different TextureRegions:
texture = new Texture(Gdx.files.internal("texture.png"));
block = new TextureRegion(texture, 0, 0, 64, 64);
block.flip(false, true);
// continue with the other blocks
Now, when I render my world, everything seems fine. But some blocks (about 10% of them) are drawn as if the TextureRegion's rectangle were positioned wrongly - each draws the bottommost pixel row of the block above it (in the resource texture) as its own topmost pixel row. Most of the blocks are rendered correctly, and I have checked multiple times that I entered the correct positions.
The odd thing is that when I launch the game on my computer - instead of on my Android device - the textures are drawn correctly!
When searching for solutions, many people refer to the texture filter, but neither Linear nor Nearest works for me. :(
Hopefully I was able to explain the problem in an accessible way and you have some ideas how to fix it (i.e. how to draw only the texture region that I want to draw)!
Best regards
EDIT: The bug only appears at certain positions. When I draw two blocks with the same texture at different positions, one of them is drawn correctly and the other is not. I don't get it...
You should always leave empty space between your images when packing them into one texture, because with FILTER_LINEAR (which I think is the default) every pixel is sampled from the four nearest pixels. If your images have no padding of empty pixels, every edge pixel will pick up pixels from the neighbouring image.
So three options to solve your issue:
Manually add space between the images in your texture file
Stop using FILTER_LINEAR (but you will get ugly results if you are not drawing at the native image dimensions, e.g. when scaling the image)
Use the libGDX TexturePacker; it has built-in functionality to do just that when it packs your images - see the sketch below
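A short sketch of options 2 and 3, assuming the usual libGDX classes (TexturePacker lives in the gdx-tools extension; the paths and padding values are placeholders):
// option 2: nearest filtering avoids sampling into the neighbouring block
texture.setFilter(Texture.TextureFilter.Nearest, Texture.TextureFilter.Nearest);

// option 3: pack with padding and duplicate the edge pixels into it
TexturePacker.Settings settings = new TexturePacker.Settings();
settings.paddingX = 2;
settings.paddingY = 2;
settings.duplicatePadding = true;   // bleeds each region's edge pixels into the padding
TexturePacker.process(settings, "raw-images", "assets", "game");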
Is there a way to combine a number of textures into one texture with the libGDX API?
For example, consider having these 3 textures:
Texture texture1 = new Texture("texture1.png");
Texture texture2 = new Texture("texture2.png");
Texture texture3 = new Texture("texture3.png");
The goal is to combine them into one usable texture, is there a way to do it?
I wanted to get the same result, but I did not find the previous instructions to be of any help, nor was I able to find any other clear solution for this issue, so here's what I managed to get working after reading a bit of the API docs and doing some testing of my own.
Texture splashTexture = new Texture("texture1.png"); // Remember to dispose
splashTexture.getTextureData().prepare(); // The api-doc says this is needed
Pixmap splashpixmap = splashTexture.getTextureData().consumePixmap(); // Strange name, but gives the pixmap of the texture. Remember to dispose this also
Pixmap pixmap = new Pixmap(splashTexture.getWidth(), splashTexture.getHeight(), Format.RGBA8888); // Remember to dispose
// We want the center point coordinates of the image region, as the circle's origin is at the center and it is drawn by the radius
int x = (int) (splashTexture.getWidth() / 2f);
int y = (int) (splashTexture.getHeight() / 2f);
int radius = (int) (splashTexture.getWidth() / 2f - 5); // -5 just to leave a small margin in my picture
pixmap.setColor(Color.ORANGE);
pixmap.fillCircle(x, y, radius);
// Draws the texture on the background shape (orange circle)
pixmap.drawPixmap(splashpixmap, 0, 0);
// TADA! New combined texture
this.splashImage = new Texture(pixmap); // This probably needs to be disposed as well when it's no longer needed
// These are not needed anymore
pixmap.dispose();
splashpixmap.dispose();
splashTexture.dispose();
Then use the splashImage (which is a class variable of type Texture in my case) to render the combined image where you want it. The resulting image has the given background and the foreground is a png image which has transparent parts to be filled by the background color.
@Ville Myrskyneva's answer works, but it doesn't entirely show how to combine 2 or more textures. I made a utility method that takes 2 textures as input and returns a combined texture. Note that the second texture will always be drawn on top of the first one.
public static Texture combineTextures(Texture texture1, Texture texture2) {
    // Prepare both textures so their pixel data can be read back as Pixmaps
    texture1.getTextureData().prepare();
    Pixmap pixmap1 = texture1.getTextureData().consumePixmap();
    texture2.getTextureData().prepare();
    Pixmap pixmap2 = texture2.getTextureData().consumePixmap();

    // Draw the second pixmap on top of the first one
    pixmap1.drawPixmap(pixmap2, 0, 0);

    // Upload the merged pixmap as a new texture and free the CPU-side data
    Texture textureResult = new Texture(pixmap1);
    pixmap1.dispose();
    pixmap2.dispose();
    return textureResult;
}
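A hypothetical call site, just to show the intent; the source textures can be disposed once the combined one exists:
Texture ground = new Texture("texture1.png");
Texture overlay = new Texture("texture2.png");
Texture combined = combineTextures(ground, overlay);
ground.dispose();
overlay.dispose();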
You should use the TexturePacker for creating one big Texture from lots of small ones: https://github.com/libgdx/libgdx/wiki/Texture-packer .
And here is a GUI version (jar) which will help you pack all your textures into one before putting them into your project: https://code.google.com/p/libgdx-texturepacker-gui/.
You should also remember that the maximum size of one texture for most Android and iOS devices is 2048x2048.
In case you want to use the combined texture as an atlas for performance benefits, you had better use @Alon Zilberman's answer.
But I think that your question is about another problem. If you want to get some custom texture out of these three textures (e.g. merging them by drawing one on top of another), then your best solution is to draw to a FrameBuffer. Here you can find a nice example of how to do it: https://stackoverflow.com/a/7632680/3802890
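Roughly, the FrameBuffer approach looks like this (a sketch only - the 256x256 size and the batch are assumptions, and the resulting texture is y-flipped relative to normal screen drawing):
FrameBuffer fbo = new FrameBuffer(Pixmap.Format.RGBA8888, 256, 256, false);
SpriteBatch batch = new SpriteBatch();
batch.getProjectionMatrix().setToOrtho2D(0, 0, 256, 256);

fbo.begin();
Gdx.gl.glClearColor(0, 0, 0, 0);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
batch.begin();
batch.draw(texture1, 0, 0);
batch.draw(texture2, 0, 0);   // drawn on top of texture1
batch.draw(texture3, 0, 0);   // drawn on top of both
batch.end();
fbo.end();

Texture combined = fbo.getColorBufferTexture();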
While working on Projectiles I thought that it would be a good idea to rotate the sprite as well, to make it look nicer.
I am currently using a 1-Dimensional Array, and the sprite's width and height can and will vary, so it makes it a bit more difficult for me to figure out on how to do this correctly.
I will be honest and say it straight out: I have absolutely no idea how to do this. I have done a few searches to try to find something, and there were some things out there, but the best I found was this:
DreamInCode ~ Rotating a 1-dimensional Array of Pixels
This method works fine, but only for square Sprites. I would also like to apply this for non-square (rectangular) Sprites. How could I set it up so that rectangular sprites can be rotated?
Currently, I'm attempting to make a laser, and it would look much better if it didn't only go along a vertical or horizontal axis.
You need to recalculate the coordinate points of your image (take a look here). You have to take the matrix product of every point of your sprite (x, y) with the rotation matrix to get the new point in space, x' and y'.
You can assume that the bottom left (or the top left, depending on your coordinate system orientation) of your sprite is at (x, y) = (0, 0).
You should recalculate the colors too: if you have a pure red pixel surrounded by blue pixels at (x, y) = (10, 5), after the rotation it may move to, say, (x, y) = (8.33, 7.1), which is not a real pixel position, because pixels don't have floating-point coordinates. So the pixel at the real position (x, y) = (8, 7) will no longer be pure red, but red with a small percentage of blue... but one thing at a time.
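To make that concrete for rectangular sprites, here is a rough nearest-neighbour sketch (the names and signature are hypothetical): it walks the destination array and samples backwards through the inverse rotation, so every output pixel gets a value, and the output is sized to the rotated bounding box.
//rotates a w x h ARGB pixel array by 'angle' radians around its centre;
//pixels that fall outside the source stay fully transparent (0)
public static int[] rotatePixels(int[] src, int w, int h, double angle, int[] outSize) {
    double cos = Math.cos(angle), sin = Math.sin(angle);
    int newW = (int) Math.ceil(Math.abs(w * cos) + Math.abs(h * sin));
    int newH = (int) Math.ceil(Math.abs(w * sin) + Math.abs(h * cos));
    int[] dst = new int[newW * newH];

    double cx = w / 2.0, cy = h / 2.0;           //source centre
    double ncx = newW / 2.0, ncy = newH / 2.0;   //destination centre

    for (int y = 0; y < newH; y++) {
        for (int x = 0; x < newW; x++) {
            double dx = x - ncx, dy = y - ncy;
            int sx = (int) Math.round( dx * cos + dy * sin + cx);   //inverse rotation
            int sy = (int) Math.round(-dx * sin + dy * cos + cy);
            if (sx >= 0 && sx < w && sy >= 0 && sy < h) {
                dst[y * newW + x] = src[sy * w + sx];
            }
        }
    }
    outSize[0] = newW;   //report the new dimensions back to the caller
    outSize[1] = newH;
    return dst;
}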
It's easier than you think: you only have to copy the original rectangular sprites, centered, into bigger square ones with a transparent background. PNG files support that, and I think you may already be using them.
I'm new to OpenGL. I'm using Java with LWJGL and Slick. So far I can draw several textures and with buffers I can copy the images shown on the screen to a texture for postprocessing purposes.
Using glColor3f() I can make the screen have the desired color (just blue, just red, only show blue and green channels, etc).
But all glColor3f(r, g, b) does is multiply r, g and b into the current pixel values. If R, G, B are the current values of a pixel, glColor produces (R*r, G*g, B*b) for every pixel.
What I want to do is use the current values R, G, B so that, for example, I can swap the red channel with the blue channel:
(B, R, G)
Or use these values and new ones for arithmetic operations
(R+G*r, (B+g)/5, B*0.2)
My purpose is to make a grayscale texture, changing every pixel's color (R, G, B) with
color = (R*0.5 + G*0.5 + B*0.5)/3
functionThatChangesColor(color, color, color);
How can I achieve this or something like it?
Thanks!
Use a shader implementing a component swizzle. With later versions of OpenGL you also can set component swizzling for a texture as texture parameters. See http://www.opengl.org/wiki/Texture#Swizzle_mask
Swizzle mask
While GLSL shaders are perfectly capable of reordering the vec4 value returned by a texture function, it is often more convenient to control the ordering of the data fetched from a texture from code. This is done through swizzle parameters.
Texture objects can have swizzling parameters. This only works for textures with color image formats. Each of the four output components, RGBA, can be set to come from a particular color channel. The swizzle mask is respected by all read accesses, whether via texture samplers or Image Load Store.
To set the output for a component, you would set the GL_TEXTURE_SWIZZLE_C texture parameter, where C is R, G, B, or A. These parameters can be set to the following values:
GL_RED: The value for this component comes from the red channel of the image. All color formats have at least a red channel.
GL_GREEN: The value for this component comes from the green channel of the image, or 0 if it has no green channel.
GL_BLUE: The value for this component comes from the blue channel of the image, or 0 if it has no blue channel.
GL_ALPHA: The value for this component comes from the alpha channel of the image, or 1 if it has no alpha channel.
GL_ZERO: The value for this component is always 0.
GL_ONE: The value for this component is always 1.
You can also use the GL_TEXTURE_SWIZZLE_RGBA parameter to set all four at once. This one takes an array of four values. For example:
//Bind the texture 2D.
GLint swizzleMask[] = {GL_ZERO, GL_ZERO, GL_ZERO, GL_RED};
glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_RGBA, swizzleMask);
This will effectively map the red channel in the image to the alpha channel when the shader accesses it.
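Since the question uses LWJGL, the channel swap from the question, (B, R, G), could be set up roughly like this (requires OpenGL 3.3 or ARB_texture_swizzle; textureId is a placeholder for your texture handle). Note that swizzling can only reorder, zero or one the channels - the weighted grayscale mix still needs a fragment shader, as mentioned above:
//static imports from org.lwjgl.opengl.GL11 and org.lwjgl.opengl.GL33 assumed
glBindTexture(GL_TEXTURE_2D, textureId);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_R, GL_BLUE);    //output red   comes from the blue channel
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_G, GL_RED);     //output green comes from the red channel
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_B, GL_GREEN);   //output blue  comes from the green channel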