This is a hard question to explain in words (well, it is for me anyway). I need to be able to take an image (bitmap) and crop it down to a certain size in the centre of the screen while keeping the overall size of the image the same. Hopefully the picture below can explain what I mean:
So the image as a whole is cropped down to the square in the middle, but it is not stretched across the screen and remains in the centre; basically the pointless part of the image is removed but the co-ordinates of the remaining pixels stay the same.
So let's say you have done your face detection and have found one face in your image. Your image is 320 x 240, and the face is bounded by the rectangle at location (100, 40) with size 20 x 30. Now what would you like to do with that information? I'll do my best to help, but you'll probably need to clear up any poor assumptions on my part.
First, you can grab the face and store it into a new bitmap with something like Bitmap.createBitmap():
Bitmap face = Bitmap.createBitmap(largeSource, 100, 40, 20, 30);
This should be done outside of the draw loop, like in onCreate or some other initialization step.
It sounds like you've got some container (ImageView? Custom View with overridden onDraw?) which is housing your large image. And now you want to just draw the face in that container, at its original position? If you've got a custom view, that's as simple as the following in your onDraw:
canvas.drawBitmap(face, 100, 40, facePaint);
If you're using an ImageView instead, I'd suggest going to a custom-drawn view instead, since it sounds like you need some fine-grained drawing control.
Finally, if you've got a bunch of these faces, create a small FaceObj POJO, which just holds a bitmap and an x and y coordinate. As you detect faces, add them to an ArrayList, and then iterate over it in your onDraw to draw all your faces:
faces.add(new FaceObj(Bitmap.createBitmap(largeSource, 100, 40, 20, 30), 100, 40));
...
for (FaceObj f : faces)
    canvas.drawBitmap(f.bitmap, f.x, f.y, facePaint);
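Such a FaceObj could be as simple as the following (just a sketch; the class name and field types are only what the snippet above implies):
import android.graphics.Bitmap;

// Simple holder for one detected face and where it was found in the source image.
public class FaceObj {
    public final Bitmap bitmap;
    public final float x;
    public final float y;

    public FaceObj(Bitmap bitmap, float x, float y) {
        this.bitmap = bitmap;
        this.x = x;
        this.y = y;
    }
}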
If I understand correctly, you don't really want to crop your image but rather "hide" the pixels around the square.
There are many ways to do this depending on what you are trying to achieve. For example, you can fill the uninteresting part of the picture with black or make it transparent.
This way the coordinates of your "cropped" picture will remain the same on the screen.
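If you go the custom View route, one simple way to do the "fill with black" version is to paint rectangles over everything outside the square after drawing the bitmap. A rough sketch (the square's coordinates here are made up for illustration):
// Inside onDraw(Canvas canvas), after the full bitmap has been drawn.
Rect keep = new Rect(100, 70, 220, 190);   // the square you want to stay visible (illustrative)
Paint mask = new Paint();
mask.setColor(Color.BLACK);

canvas.drawRect(0, 0, canvas.getWidth(), keep.top, mask);                       // above the square
canvas.drawRect(0, keep.bottom, canvas.getWidth(), canvas.getHeight(), mask);   // below it
canvas.drawRect(0, keep.top, keep.left, keep.bottom, mask);                     // left of it
canvas.drawRect(keep.right, keep.top, canvas.getWidth(), keep.bottom, mask);    // right of it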
If all you are interested in is the center of the image, then one really easy way to do this is to add the android:scaleType="center" attribute to your ImageView and set the ImageView to the crop size you want. If you want it to be positionable, though, that's a different story.
Using Canvas.drawBitmap() you can copy part of the image and discard the rest. With this particular version of the method, you pass in the array of colors you get from getPixels(), plus an offset and the width and height you want to copy. The stride parameter is important, though: it needs to be set to the width of the original image, even if your final image will be smaller, because the pixels are being pulled from the original image.
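Put together, that approach could look roughly like this (a sketch; source and canvas are assumed to exist already, and the crop rectangle values are illustrative):
// Copy a w x h region starting at (cropX, cropY) and draw it back at the same position.
int cropX = 100, cropY = 70, w = 120, h = 120;   // illustrative crop rectangle
int[] pixels = new int[source.getWidth() * source.getHeight()];
source.getPixels(pixels, 0, source.getWidth(), 0, 0, source.getWidth(), source.getHeight());

// Offset points at the first pixel of the crop; stride stays the full source width
// so each row is read from the right place in the original image.
int offset = cropY * source.getWidth() + cropX;
canvas.drawBitmap(pixels, offset, source.getWidth(), cropX, cropY, w, h, false, null);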
I would like to get the texture data out of an Image so it can be rendered directly, but I don't see a straightforward way of doing it; Google searches only show the other way around.
I am able to get the Drawable, but the interface doesn't expose the actual texture data, so I see no way to convert an Image actor into a Texture.
What I am trying to achieve is to have brushes, where the data for each brush is stored in the image of an ImageButton. Upon clicking an ImageButton, the user would be able to draw on the screen based on the image stored in that button.
How might I be able to do that?
After digging deeper, I found a solution on the forums:
Your best bet is probably to draw your textures to a FrameBuffer at a 1:1 scale; then you can use ScreenUtils.getFrameBufferPixmap to copy a Pixmap off the FrameBuffer and resize it as you wish.
Something like this:
fb.begin();
sb.begin();
sb.draw(texture, 0, 0);
sb.end();
Pixmap pm = ScreenUtils.getFrameBufferPixmap(0, 0, width, height);
fb.end();
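Filling in the parts the quote leaves implicit, the setup around that snippet might look roughly like this (only a sketch; the FrameBuffer size, the projection setup and the dispose calls are my own assumptions, and texture is whatever you pulled out of the drawable, e.g. via a TextureRegionDrawable):
// One-time setup: an off-screen buffer the same size as the texture.
FrameBuffer fb = new FrameBuffer(Pixmap.Format.RGBA8888, texture.getWidth(), texture.getHeight(), false);
SpriteBatch sb = new SpriteBatch();
sb.getProjectionMatrix().setToOrtho2D(0, 0, texture.getWidth(), texture.getHeight());

// Render the texture 1:1 into the FrameBuffer and read the pixels back.
fb.begin();
sb.begin();
sb.draw(texture, 0, 0);
sb.end();
Pixmap pm = ScreenUtils.getFrameBufferPixmap(0, 0, texture.getWidth(), texture.getHeight());
fb.end();

// ... use pm as the brush data, then clean up.
pm.dispose();
fb.dispose();
sb.dispose();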
It's not exactly a texture, but it is exactly what I would have needed: the pixel data extracted from a drawable.
After much thought I decided that it is easier to just store the texture data redundantly next to the ImageButtons and query the stored data directly, instead of using the ImageButton as a kind of container, which it shouldn't be.
I want to develop a simple 2D side scrolling game using libGDX.
My world contains many different 64x64 pixel blocks that are drawn by a SpriteBatch using a camera to fit the screen. My 640x640px resource file contains all these images. The block textures are positioned at (0, 0), (0, 64), (64, 0), ... and so on in my resource file.
When my app launches, I load the texture and create many different TextureRegions:
texture = new Texture(Gdx.files.internal("texture.png"));
block = new TextureRegion(texture, 0, 0, 64, 64);
block.flip(false, true);
// continue with the other blocks
Now, when I render my world, everything seems fine. But some blocks (about 10% of them) are drawn as if the TextureRegion's rectangle were positioned wrongly: the bottommost pixel row of the block above it in the resource texture shows up as the block's topmost pixel row. Most of the blocks are rendered correctly, and I have checked multiple times that I entered the correct positions.
The odd thing is that when I launch the game on my computer instead of my Android device, the textures are drawn correctly!
When searching for solutions, many people refer to the texture filter, but neither Linear nor Nearest works for me. :(
Hopefully I was able to explain the problem in an accessible way and you have some ideas on how to fix it (i.e. how to draw only the texture region that I actually want to draw)!
Best regards
EDIT: The bug only appears at certain positions. When I draw two blocks with the same texture at different positions, one of them is drawn correctly and the other is not. I don't get it...
You should always leave empty space between your images when packing them into one texture, because with FILTER_LINEAR (which I think is the default) every pixel is sampled from the four nearest pixels. If your images have no padding of empty pixels around them, then for all the edge pixels it will pick up pixels from the neighboring image.
So there are three options to solve your issue (a short sketch of options 2 and 3 follows the list):
Manually add space between the images in your texture file
Stop using FILTER_LINEAR (but you will get ugly results if you are not drawing at the image's native dimensions, e.g. when scaling the image)
Use the libGDX TexturePacker, which has built-in functionality to do just that when you pack your images
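For option 2 it's a one-liner against the texture loading code from the question, and for option 3 the padding is part of the packer settings (this is only a sketch; the directory and pack file names are placeholders):
// Option 2: sample only the nearest texel, so neighboring atlas regions never bleed in.
texture = new Texture(Gdx.files.internal("texture.png"));
texture.setFilter(Texture.TextureFilter.Nearest, Texture.TextureFilter.Nearest);

// Option 3: when packing with the gdx-tools TexturePacker, leave padding between regions.
TexturePacker.Settings settings = new TexturePacker.Settings();
settings.paddingX = 2;   // pixels of empty space between packed images
settings.paddingY = 2;
TexturePacker.process(settings, "raw-blocks", "packed", "blocks");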
I am working on a Java game which deals with a bunch of sprite sheets, and I was wondering whether I should have separate sprite sheets for left and right animations, or whether I should just draw up the left sprites and reverse the image programmatically for the right animations. Which one would be better practice, and would either of them perform better? I was thinking of having the image flipping occur during the game's init(). If I do go with direction flipping (saving a lot of time in Photoshop), would this be a safe way to go:
playerAttackLeft = spriteSheet.crop(0, 0, 400, 400); //(x, y, width, height)
playerAttackRight = spriteSheet.crop(400, 0, -400, 400);
?
You should flip (or rotate) the image in code and reuse it instead of loading a new one.
When you read an image, the JVM needs memory to hold it.
Here is an example from when I tried this on my computer:
I had an image of 100 KB, and when I loaded it in my class it took up approximately 1 MB of memory.
Reading an image is a costly process.
On the other hand, if you reuse a rotated copy of the image, you save not only memory but also time (both space and time complexity), because rotating an image in memory takes much less time than reading another external image.
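If your frames end up as BufferedImages, mirroring a frame once at init time could look roughly like this (a sketch not tied to any particular framework; the negative-width trick is the same idea as the crop call in the question):
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

// Mirror a sprite frame horizontally once (e.g. during init()) so the
// right-facing animation is derived from the left-facing artwork.
public static BufferedImage flipHorizontally(BufferedImage src) {
    BufferedImage flipped = new BufferedImage(src.getWidth(), src.getHeight(), BufferedImage.TYPE_INT_ARGB);
    Graphics2D g = flipped.createGraphics();
    // Drawing with a negative width mirrors the image about its vertical axis.
    g.drawImage(src, src.getWidth(), 0, -src.getWidth(), src.getHeight(), null);
    g.dispose();
    return flipped;
}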
I am making a game in Java. I made a planet seen from outer space and I want to make it look like the planet is slowly rotating. But I don't know how to rotate an image. I need a simple command that rotates my image 1 degree around its own center. Any help?
This is what I want to do:
[image]
Take a look at these tutorials:
Java2D: Have Fun With Affine Transform
Coordinate Translations and Rotations: Example Code
Transforming Shapes, Text, and Images
What you are describing is not rotating an image, but changing an image to represent a 3D rotation of the object in the image.
Ideally you wouldn't be working with this as an image but rather as a 3D object with a different camera angle. Then you would simply rotate the camera around the object and display the resulting image to the user.
However, if you're set on doing this with images, then you need to create a set of images representing the various states of rotation of your planet and have a separate thread (or timer) replace the displayed image with the next one in the sequence at repeated intervals. Search the web for "java image animation"; there are plenty of tutorials on how to do this.
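A minimal version of that frame-swapping idea, using a Swing timer rather than a hand-rolled thread (a sketch; frames, currentFrame and planetPanel are assumed fields, and the 100 ms delay is arbitrary):
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;

// Advance to the next pre-rendered rotation frame every 100 ms and repaint.
javax.swing.Timer timer = new javax.swing.Timer(100, new ActionListener() {
    @Override
    public void actionPerformed(ActionEvent e) {
        currentFrame = (currentFrame + 1) % frames.length;  // frames = pre-rendered rotations
        planetPanel.repaint();                              // paintComponent draws frames[currentFrame]
    }
});
timer.start();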
If you want to rotate an image in 2D space, you can use something like this:
Image image = ...;
Graphics2D g2d = ...;
g2d.translate(170, 0); // If needed
g2d.rotate(1); // Rotate by 1 radian
// or g2d.rotate(Math.toRadians(1)); to rotate by 1 degree
g2d.drawImage(image, 0, 0, 200, 200, observer);
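Since the question asks for rotation around the image's own center, the rotate overload that takes an anchor point is worth knowing about. Continuing with the same g2d, image and observer (the angle and drawing position are illustrative):
// Rotate 1 degree around the image's center, then draw the image at (x, y).
double angle = Math.toRadians(1);
int x = 100, y = 100;   // illustrative drawing position
g2d.rotate(angle, x + image.getWidth(null) / 2.0, y + image.getHeight(null) / 2.0);
g2d.drawImage(image, x, y, observer);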
I'm trying to write a graphical effect where a circle moves around an image smudging the image as it goes (like the way the smudge tool in Gimp or Photoshop would work). The basic algorithm I'm using is:
the circle moves from position A to position B on the bitmap
copy a circle of pixels from position A into a temporary bitmap
draw this circle of pixels from the temporary bitmap at position B with an alpha of about 50%.
This works fine and looks like what I would expect, where the image will look like it's getting smudged if the circle moves 1 pixel at a time over the image.
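On Android, steps 2 and 3 can be written roughly like this (a sketch; radius, the circle centers (ax, ay) and (bx, by), source and canvas are all assumed, and the circular mask itself isn't shown, only the square copy and the 50% stamp):
// Step 2: copy a circle-sized square of pixels around position A into a temporary bitmap.
int d = 2 * radius;
Bitmap temp = Bitmap.createBitmap(source, ax - radius, ay - radius, d, d);

// Step 3: stamp it at position B at roughly 50% opacity.
Paint smudgePaint = new Paint();
smudgePaint.setAlpha(128);
canvas.drawBitmap(temp, bx - radius, by - radius, smudgePaint);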
I now want to add some texture to the smudge effect. I have a bitmap that contains a picture of a paint blob. The algorithm above is modified as follows so that the smudge takes the shape of this paint blob:
as before
replace the temporary bitmap's pixels with the paint blob texture, then copy the circle of pixels from position A into the temporary bitmap, but keep only the pixels that line up with paint blob pixels (i.e. use Porter-Duff "source in destination" mode when drawing the circle into the temporary bitmap); a sketch of this step follows the list
as before
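The textured version of step 2 might look something like this (again just a sketch; blobBitmap is my name for the paint blob, and the other variables follow the earlier snippet):
// Start the temporary bitmap off as a mutable copy of the paint blob...
Bitmap temp = blobBitmap.copy(Bitmap.Config.ARGB_8888, true);
Canvas tempCanvas = new Canvas(temp);

// ...then keep only the source pixels that land on blob pixels ("source in destination").
Paint srcInPaint = new Paint();
srcInPaint.setXfermode(new PorterDuffXfermode(PorterDuff.Mode.SRC_IN));
Bitmap circle = Bitmap.createBitmap(source, ax - radius, ay - radius, d, d);
tempCanvas.drawBitmap(circle, 0, 0, srcInPaint);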
This almost works, and it looks fine initially, but gradually the smudging makes the colors in my image darker! If the circle passes over the same area several times, the colors eventually turn black. Any ideas what I could be doing wrong?
I've implemented the above in Android. I happened upon this post about Android bitmaps (like my paint blob texture) being loaded with "premultiplied alpha", where the author says it caused his images to become darker:
http://www.kittehface.com/2010/06/androidbitmap-and-premultiplied-alpha.html
I suspect I'm suffering from a similar problem, but I don't understand what's going on well enough to know how to fix it. Does anyone have any hints about what might be going on?
Well, at first glance the reason the image is getting darker is step 3 of your first algorithm: you are overlaying a pixel on top of an existing pixel at 50% alpha. You might want to consider using the mean of the original pixel value and the new pixel value instead. You might also want to research some blurring algorithms.
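If you want to try the averaging idea, blending two ARGB pixels channel by channel looks like this (just a sketch of the suggestion above, not a fix for the premultiplication issue itself):
// Channel-wise mean of two ARGB pixel values.
static int averagePixels(int p1, int p2) {
    int a = (Color.alpha(p1) + Color.alpha(p2)) / 2;
    int r = (Color.red(p1) + Color.red(p2)) / 2;
    int g = (Color.green(p1) + Color.green(p2)) / 2;
    int b = (Color.blue(p1) + Color.blue(p2)) / 2;
    return Color.argb(a, r, g, b);
}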