I have a problem with OpenGL frame buffers.
I want to achieve:
Bind FBO1. Draw image1 to FBO1. Unbind FBO1.
Bind FBO2. Draw FBO1 to FBO2. Unbind FBO2.
Bind FBO1. Draw image2 to FBO1. Unbind FBO1.
Draw FBO2 to the screen
What I expect to see:
I expect to see only image1 on the screen, because it is drawn into the first FBO and then the first FBO is drawn onto the second FBO.
What is the problem:
I am seeing both image1 and image2 drawn, which should be impossible, because only the first image has been drawn into FBO1 at the moment FBO1 is drawn into FBO2, and I only draw FBO2 to the screen.
Here is the code to reproduce the problem:
// --- in show() method
this.img = new Texture(Gdx.files.internal("shaders/image.jpg"));
this.img2 = new Texture(Gdx.files.internal("shaders/image2.jpg"));
this.fbo = new FrameBuffer(Format.RGBA8888, Gdx.graphics.getWidth(), Gdx.graphics.getHeight(), false);
this.fbo2 = new FrameBuffer(Format.RGBA8888, Gdx.graphics.getWidth(), Gdx.graphics.getHeight(), false);
// --- in render() method
// start frame buffer 1 and draw image 1
this.fbo.begin();
{
this.batch.begin();
this.batch.draw(this.img, 0, 0, 400, 400, 0, 0, this.img.getWidth(), this.img.getHeight(), false, true);
this.batch.end();
}
this.fbo.end();
// start frame buffer 2 and draw frame buffer 1's output
this.fbo2.begin();
{
this.batch.begin();
this.batch.draw(this.fbo.getColorBufferTexture(), 500, 0,
        this.fbo.getColorBufferTexture().getWidth(), this.fbo.getColorBufferTexture().getHeight(),
        0, 0, this.fbo.getColorBufferTexture().getWidth(), this.fbo.getColorBufferTexture().getHeight(),
        false, true);
this.batch.end();
}
this.fbo2.end();
// start frame buffer 1 again and draw image 2
this.fbo.begin();
{
this.batch.begin();
this.batch.draw(this.img2, 150, 150, 400, 400, 0, 0, this.img2.getWidth(), this.img2.getHeight(), false, true);
this.batch.end();
}
this.fbo.end();
// draw frame buffer 2 to the screen
this.batch.begin();
this.batch.draw(this.fbo2.getColorBufferTexture(), 0, 0);
this.batch.end();
The draw() calls are a bit long because I want to pass flipY = true, since OpenGL renders the frame buffers upside down.
The parameters of the draw() method that I am using are:
Texture texture, float x, float y, float width, float height, int srcX, int srcY, int srcWidth, int srcHeight, boolean flipX, boolean flipY
What am I missing? Why is this happening?
Each time you begin drawing on another FrameBuffer, you need to clear it if you don't want the old contents.
frameBuffer.begin();
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
//...
You also need to clear the screen at the beginning of render() if you don't want the contents from the last call to render().
Also, if you are covering the whole background with your image, you might as well disable blending on the SpriteBatch.
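As a minimal sketch (untested, and the clear color is just an example), the render() method from the question would clear each render target like this:
Gdx.gl.glClearColor(0, 0, 0, 1);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT); // clear the screen at the start of render()
this.fbo.begin();
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT); // clear FBO1 before drawing image 1
// ... draw this.img as before ...
this.fbo.end();
this.fbo2.begin();
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT); // clear FBO2 before drawing FBO1's texture into it
// ... draw this.fbo.getColorBufferTexture() as before ...
this.fbo2.end();
this.fbo.begin();
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT); // clear FBO1 again before drawing image 2
// ... draw this.img2 as before ...
this.fbo.end();
// finally draw FBO2's color texture to the screen as before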
Related
I just want the shadow of an empty box. But if I give the Rect a transparent Paint color the shadow becomes transparent too. Is this possible?
Paint paint = new Paint();
//paint.setColor(0x00000000);
paint.setShadowLayer(10, 0, 0, Color.BLACK);
Rect rect = new Rect(0, 0, 100, 100);
canvas.drawRect(rect, paint);
I ended up creating a second Canvas and a second Bitmap and then merging them after cutting out some parts with PorterDuff.Mode.CLEAR. This way I create layers, like in an image editing application, and don't lose the background.
This is how I cut parts out:
Paint paint = new Paint();
paint.setXfermode(new PorterDuffXfermode(PorterDuff.Mode.CLEAR));
Rect rect = new Rect(left, top, right, bottom);
canvas.drawRect(rect, paint);
And this is how I put the layers back together:
bottomCanvas.drawBitmap(topBitmap, 0, 0, null);
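Putting the pieces together, this is a minimal sketch of that layering approach (untested; width, height and bottomCanvas are assumed to exist in the surrounding code):
// top layer: draw the shadowed box, then cut the box itself out
Bitmap topBitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
Canvas topCanvas = new Canvas(topBitmap);
Paint shadowPaint = new Paint();
shadowPaint.setShadowLayer(10, 0, 0, Color.BLACK);
Rect rect = new Rect(0, 0, 100, 100);
topCanvas.drawRect(rect, shadowPaint); // box plus its shadow
Paint clearPaint = new Paint();
clearPaint.setXfermode(new PorterDuffXfermode(PorterDuff.Mode.CLEAR));
topCanvas.drawRect(rect, clearPaint); // remove the box; the shadow around it remains
// merge the top layer onto the background layer
bottomCanvas.drawBitmap(topBitmap, 0, 0, null);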
How can I draw a filled red oval on a Graphics object created from a BufferedImage that is filled with black?
What I have tried:
public void draw(){
BufferedImage bufferedImage = new BufferedImage(4, 5, BufferedImage.TYPE_INT_ARGB);
Graphics g = bufferedImage.createGraphics();
g.setColor(Color.BLACK);
g.fillRect(0, 0, 4, 5);
g.setColor(Color.RED);
g.fillOval(1, 1, 2, 2);
g.dispose();
}
The result is a filled red rectangle in a filled black rectangle:
But I want that filled red rectangle to be a filled red oval. How can I do that?
I want to use that image as a mouse cursor.
It looks like a red rectangle because the oval is only a couple of pixels wide and tall. Since there isn't enough space to rasterize a curve, you won't get one. Try a bigger oval, like g.fillOval(100, 100, 200, 200);
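As a minimal sketch (sizes picked arbitrarily, and someComponent is a placeholder for whatever Swing component should show the cursor), a larger image both makes the curve visible and can be used as a mouse cursor:
BufferedImage cursorImage = new BufferedImage(32, 32, BufferedImage.TYPE_INT_ARGB);
Graphics g = cursorImage.createGraphics();
g.setColor(Color.BLACK);
g.fillRect(0, 0, 32, 32); // black background
g.setColor(Color.RED);
g.fillOval(4, 4, 24, 24); // big enough for the curve to be visible
g.dispose();
Cursor cursor = Toolkit.getDefaultToolkit()
        .createCustomCursor(cursorImage, new Point(0, 0), "redOval");
someComponent.setCursor(cursor); // install it on the component that should use it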
I want to develop an application that takes a photograph and lets you select a frame for the photo.
I have already developed the interface parts, like taking the photo and storing it.
But I could not manage to put a frame on the taken photo.
Any recommendation?
In what format do you store the taken photos and your frame images? If you plan to insert your picture into a simple frame, I'd suggest first drawing your taken picture on a Canvas, and then drawing your frame over it. Keep the sizing in mind - you don't want your picture to be too small or too big.
I'm providing an example snippet here:
public Bitmap Output(Bitmap takenPhoto, Bitmap frameImage)
{
int width, height, x, y;
height = ### // your desired height of the whole output image
width = ### // your desired width of the whole output image
x = ### // specify indentation for the picture
y = ###
Bitmap result = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
Canvas c = new Canvas(result);
Paint paint = new Paint();
c.drawBitmap(takenPhoto, x, y, paint); // draw your photo over canvas, keep indentation in mind (x and y)
c.drawBitmap(frameImage, 0, 0, paint); // now draw your frame on top of the image
return result;
}
Keep in mind that I have not tested the code and you might (actually, you'll have to) apply corrections.
Thanks for the help; here is my solution:
ImageView Resultado;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
Bitmap miBitmap = Bitmap.createBitmap(1000, 1000, Bitmap.Config.ARGB_8888);
Canvas c = new Canvas(miBitmap);
Paint paint = new Paint();
c.drawBitmap(drawableToBitmap(R.mipmap.ic_launcher), 500, 500, paint); // draw your photo over canvas, keep indentation in mind (x and y)
c.drawBitmap(drawableToBitmap(R.drawable.silver_frame), 0, 0, paint); // now draw your frame on top of the image
Resultado.setImageBitmap(miBitmap); // Resultado still needs to be initialized with findViewById(...) before this call
}
public Bitmap drawableToBitmap(int imagen){
return BitmapFactory.decodeResource(getApplicationContext().getResources(), imagen);
}
I want to construct a TRIANGLE_STRIP shape with a different texture in each sector.
Is it possible to use the TRIANGLE_STRIP shape mode in this case?
I can't understand how to set the horizontal and vertical texture-mapping coordinates in this shape mode, because each triangle shares vertex points with its neighbours, and because of that I can set the texture-mapping coordinates for only one image.
PImage img1, img2, img3, img4, img5;
void setup() {
size(800, 300, P3D);
img1 = loadImage("img1.png");
img2 = loadImage("img2.png");
....
textureMode(NORMAL);
// textureWrap(REPEAT);
}
void draw(){
background(0);
stroke(255);
beginShape(TRIANGLE_STRIP);
texture(img1);
vertex(30, 286,0,1);
vertex(158, 30, 0.5, 0);
vertex(286, 286,1,1);
texture(img2);
vertex(414, 30, ?, ?);
texture(img3);
vertex(542, 286, ?, ?);
texture(img4);
vertex(670, 30,?,?);
texture(img5);
vertex(798, 286,?,?);
endShape();
}
UPD
I want to achieve a result similar to this animation:
https://mir-s3-cdn-cf.behance.net/project_modules/disp/7d7bf511219015.560f42336f0bd.gif
First of all I want to construct a complicated object based on the TRIANGLE_STRIP or QUAD_STRIP shape mode, and after that just change the z coordinate of the vertices.
So I just took an image with such an inscription and cut it into separate images, one for each sector of the shape.
If someone knows an easier way to do this, I would be very thankful for any help.
Step 1: Create a small sketch that simply displays a triangle strip (without any texturing) over the entire space you want to take up. Here's an example that fills the whole screen:
void setup() {
size(300, 200);
}
void draw() {
background(0);
stroke(0);
beginShape(TRIANGLE_STRIP);
vertex(0, 200);
vertex(0, 0);
vertex(50, 200);
vertex(100, 0);
vertex(150, 200);
vertex(200, 0);
vertex(250, 200);
vertex(300, 0);
vertex(300, 200);
endShape();
}
The goal is to make sure your vertexes cover the area you want your image to cover. You want something that looks like this:
This will also make it easier to map the vertex coordinates to image texture coordinates.
Step 2: Create an image that you want to use as a texture. I'll use this one:
Step 3: For each vertex you're drawing on the screen, figure out where in the image that point is. If a point is in the middle of the screen, then you need to figure out the position of the middle of the image. Those are your values for u and v.
Alternatively, you can use textureMode(NORMAL) so you can specify u and v as normalized values between 0 and 1. The middle of the image becomes point (.5, .5).
Which approach you take is up to you, but in either case you have to map the screen vertex positions to the image u, v positions. I'll use the normalized values here:
PImage img;
void setup() {
size(300, 200, P3D);
img = loadImage("test.png");
textureMode(NORMAL);
}
void draw() {
background(0);
stroke(0);
beginShape(TRIANGLE_STRIP);
texture(img);
vertex(0, 200, 0, 1);
vertex(0, 0, 0, 0);
vertex(50, 200, .16, 1);
vertex(100, 0, .33, 0);
vertex(150, 200, .5, 1);
vertex(200, 0, .66, 0);
vertex(250, 200, .83, 1);
vertex(300, 0, 1, 0);
vertex(300, 200, 1, 1);
endShape();
}
Step 4: Now you can modify the position of one of the vertexes, and you'll morph the image drawn on screen:
PImage img;
void setup() {
size(300, 200, P3D);
img = loadImage("test.png");
textureMode(NORMAL);
}
void draw() {
background(0);
stroke(0);
beginShape(TRIANGLE_STRIP);
texture(img);
vertex(0, 200, 0, 1);
vertex(0, 0, 0, 0);
vertex(50, 200, .16, 1);
vertex(100, 0, .33, 0);
vertex(mouseX, mouseY, .5, 1);
vertex(200, 0, .66, 0);
vertex(250, 200, .83, 1);
vertex(300, 0, 1, 0);
vertex(300, 200, 1, 1);
endShape();
}
You can play around with it to get the exact effect you're looking for, but following these steps should be your general approach. Note that you only need a single image, and you need to figure out the u and v values for every vertex you draw on screen. Start with a triangle mesh that displays the image normally, and then modify that mesh to morph your image.
Also note that I could have done a lot of this calculation programmatically. For example, instead of hard-coding the value 150, I could have used width/2.0. But first you need to understand the relationship between the x, y on screen and the u, v in the texture. Once you understand that relationship, you can calculate them programmatically if you want, as in the sketch below.
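As an untested sketch of that idea, here is the same strip built in a loop, with u computed from the x position instead of hard-coded; it uses a column of two vertices per step, which covers the same rectangle:
PImage img;
void setup() {
  size(300, 200, P3D);
  img = loadImage("test.png");
  textureMode(NORMAL);
}
void draw() {
  background(0);
  stroke(0);
  beginShape(TRIANGLE_STRIP);
  texture(img);
  int columns = 6; // number of vertical slices
  for (int i = 0; i <= columns; i++) {
    float x = map(i, 0, columns, 0, width);
    float u = x / (float) width; // normalized texture coordinate
    vertex(x, height, u, 1); // bottom edge of the strip
    vertex(x, 0, u, 0);      // top edge of the strip
  }
  endShape();
}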
I'm struggling to understand how to merge 4 pictures together in Java. I want to copy each image into the merged image with the overlapping 20 pixels blended in a 50% merge, so that the merged image has a 20-pixel boundary that is a blend of the appropriate portion of each image.
So: a box of 4 images, blended into each other by 20 pixels. I'm not sure how I should use the widths and heights of the images; it is very confusing.
Something like this. How can I do it?
I got all of my info from: AlphaComposite, Compositing Graphics, Concatenating Images.
The following program is an improved version. It uses two methods, joinHorizontal and joinVertical, to join the images. Inside the methods, the following happens:
the second image is copied, but only the part that overlaps
the copied image is set at half alpha (transparency)
on the canvas of the 'return image', the first image is painted, followed by the second without the overlapping part
the copied image is painted onto the canvas.
the image is returned
Why do I only set one image at half alpha and not both?
Picture a clear, glass window:
Paint random points red so that half of the window is covered with red. Now, treat the window with the red dots as your new canvas.
Paint random points blue so that the new "canvas" is half covered with blue. The window won't be completely covered; you will still be able to see through it.
But let's imagine that we first painted the window red, and then painted half of it blue. Now, it will be half blue and half red, but not transparent at all.
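In compositing terms: over the 20-pixel overlap the half-alpha copy of i2 is painted on top of the opaque i1, so each pixel ends up as 0.5 * i2 + 0.5 * i1 and stays fully opaque. If both layers were drawn at half alpha instead, the overlap would come out as 0.5 * i2 + 0.25 * i1 with only 75% total opacity - partly see-through, just like the painted window.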
public class ImageMerger {
/**
* @param args
* @throws IOException
*/
public static void main(String[] args) throws IOException {
BufferedImage img1 = //some code here
BufferedImage img2 = //some code here
BufferedImage img3 = //some code here
BufferedImage img4 = //some code here
int mergeWidth = 20; // pixels to merge.
BufferedImage merge = ImageMerger.joinVertical(
ImageMerger.joinHorizontal(img1, img2, mergeWidth),
ImageMerger.joinHorizontal(img3, img4, mergeWidth),mergeWidth);
//do whatever you want with merge. gets here in about 75 milliseconds
}
public static BufferedImage joinHorizontal(BufferedImage i1, BufferedImage i2, int mergeWidth){
if (i1.getHeight() != i2.getHeight()) throw new IllegalArgumentException("Images i1 and i2 are not the same height");
// copy only the overlapping part (the first mergeWidth columns) of i2
BufferedImage imgClone = new BufferedImage(mergeWidth, i2.getHeight(), BufferedImage.TYPE_INT_ARGB);
Graphics2D cloneG = imgClone.createGraphics();
cloneG.drawImage(i2, 0, 0, null);
// redraw it with DST_IN at 0.5f so the copy ends up at half alpha
cloneG.setComposite(AlphaComposite.getInstance(AlphaComposite.DST_IN, 0.5f));
cloneG.drawImage(i2, 0, 0, null);
cloneG.dispose();
// the result is as wide as both images together, minus the overlap
BufferedImage result = new BufferedImage(i1.getWidth() + i2.getWidth()
- mergeWidth, i1.getHeight(), BufferedImage.TYPE_INT_ARGB);
Graphics2D g = result.createGraphics();
g.drawImage(i1, 0, 0, null); // first image at full alpha
g.drawImage(i2.getSubimage(mergeWidth, 0, i2.getWidth() - mergeWidth,
i2.getHeight()), i1.getWidth(), 0, null); // second image without the overlapping strip
g.drawImage(imgClone, i1.getWidth() - mergeWidth, 0, null); // half-alpha overlap painted on top
g.dispose();
return result;
}
public static BufferedImage joinVertical(BufferedImage i1, BufferedImage i2, int mergeWidth){
if (i1.getWidth() != i2.getWidth()) throw new IllegalArgumentException("Images i1 and i2 are not the same width");
BufferedImage imgClone = new BufferedImage(i2.getWidth(), mergeWidth, BufferedImage.TYPE_INT_ARGB);
Graphics2D cloneG = imgClone.createGraphics();
cloneG.drawImage(i2, 0, 0, null);
cloneG.setComposite(AlphaComposite.getInstance(AlphaComposite.DST_IN, 0.5f));
cloneG.drawImage(i2, 0, 0, null);
BufferedImage result = new BufferedImage(i1.getWidth(),
i1.getHeight() + i2.getHeight() - mergeWidth, BufferedImage.TYPE_INT_ARGB);
Graphics2D g = result.createGraphics();
g.drawImage(i1, 0, 0, null);
g.drawImage(i2.getSubimage(0, mergeWidth, i2.getWidth(),
i2.getHeight() - mergeWidth), 0, i1.getHeight(), null);
g.drawImage(imgClone, 0, i1.getHeight() - mergeWidth, null);
return result;
}
}