Why does my smudge image algorithm make the image darker? - java

I'm trying to write a graphical effect where a circle moves around an image smudging the image as it goes (like the way the smudge tool in Gimp or Photoshop would work). The basic algorithm I'm using is:
1) The circle moves from position A to position B on the bitmap.
2) Copy a circle of pixels from position A into a temporary bitmap.
3) Draw this circle of pixels from the temporary bitmap to position B using an alpha of about 50%.
This works fine and looks like what I would expect, where the image will look like it's getting smudged if the circle moves 1 pixel at a time over the image.
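Concretely, the basic step looks something like this (a simplified sketch; the helper name and variables are just for illustration, and the circular clip is left out for brevity):

import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Paint;
import android.graphics.Rect;

// Illustrative helper, not my exact code. "image" must be a mutable bitmap.
static void smudgeStep(Bitmap image, int ax, int ay, int bx, int by, int d) {
    // Step 2: copy a d x d block of pixels from position A into a temp bitmap.
    Bitmap temp = Bitmap.createBitmap(d, d, Bitmap.Config.ARGB_8888);
    new Canvas(temp).drawBitmap(image,
            new Rect(ax, ay, ax + d, ay + d), new Rect(0, 0, d, d), null);

    // Step 3: draw the copied pixels at position B with ~50% alpha.
    Paint halfAlpha = new Paint();
    halfAlpha.setAlpha(128); // 128/255, roughly 50%
    new Canvas(image).drawBitmap(temp, bx, by, halfAlpha);
}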
I now want to add some texture to the smudge effect. I have a bitmap that contains a picture of a paint blob. The algorithm from the above is modified to the following so the smudge takes the shape of this paint blob:
1) As before.
2) Fill the temporary bitmap with the paint blob texture, then copy the circle of pixels from position A into it, keeping only the pixels that line up with blob pixels (i.e. use Porter-Duff "source in destination" mode when drawing the circle into the temporary bitmap; sketched below).
3) As before.
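In code, the modified step 2 looks roughly like this (again a simplified sketch with illustrative names):

import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Paint;
import android.graphics.PorterDuff;
import android.graphics.PorterDuffXfermode;
import android.graphics.Rect;

// Illustrative helper: "blob" is the paint blob texture.
static Bitmap maskedCopy(Bitmap image, Bitmap blob, int ax, int ay, int d) {
    Bitmap temp = Bitmap.createBitmap(d, d, Bitmap.Config.ARGB_8888);
    Canvas c = new Canvas(temp);
    // Fill the temp bitmap with the blob texture (the destination).
    c.drawBitmap(blob, null, new Rect(0, 0, d, d), null);
    // Copy the circle of pixels from A, keeping only those that land on blob pixels.
    Paint srcIn = new Paint();
    srcIn.setXfermode(new PorterDuffXfermode(PorterDuff.Mode.SRC_IN));
    c.drawBitmap(image, new Rect(ax, ay, ax + d, ay + d), new Rect(0, 0, d, d), srcIn);
    return temp;
}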
This almost works and it looks like it's fine initially but gradually the smudging makes the colors in my image darker! If the circle passes over the same area several times, the colors eventually change to black. Any ideas what I could be doing wrong?
I've implemented the above in Android. I happened upon this post about bitmaps in Android (like my paint blob texture) being loaded with "premultiplied alpha", where the author says this caused his images to become darker:
http://www.kittehface.com/2010/06/androidbitmap-and-premultiplied-alpha.html
I suspect I'm suffering from a similar problem but I don't understand what's going on well enough and don't know how to fix it. Does anyone have hints at what might be going on?

Well, at first glance the reason the image is getting darker is step #3 of the first three steps: you are overlaying a pixel on an existing pixel at 50% alpha. You might want to consider using the mean of the original pixel value and the new pixel value instead. You might also want to research some blurring algorithms.
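For example, a mean blend could look something like this (a rough sketch assuming Android Bitmaps; the names are mine):

import android.graphics.Bitmap;
import android.graphics.Color;

// Rough sketch: average each channel of two same-sized bitmaps instead of
// compositing at 50% alpha. "dst" must be mutable. Slow but illustrative.
static void meanBlendInto(Bitmap dst, Bitmap src) {
    for (int y = 0; y < dst.getHeight(); y++) {
        for (int x = 0; x < dst.getWidth(); x++) {
            int a = dst.getPixel(x, y);
            int b = src.getPixel(x, y);
            dst.setPixel(x, y, Color.rgb(
                    (Color.red(a) + Color.red(b)) / 2,
                    (Color.green(a) + Color.green(b)) / 2,
                    (Color.blue(a) + Color.blue(b)) / 2));
        }
    }
}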

Related

How to draw border around image

I have an ImageView with an image, and I want to draw a border around the image. The main problem is that the image is not a rectangle or circle and does not cover the full View. For example, I want to make something like this:
This is not so trivial. But I think if you use these steps, you should be able to pull this off:
1) Extract Bitmap from ImageView (or instead just take it directly from the resource you are using).
2) Iterate over all the pixels. If one of the neighbouring pixels is not empty (transparent/white) and the current pixel is empty, then set that pixel to red (apply these changes only after you have finished the iteration, e.g. by writing into a copy; see the sketch below).
3) Set bitmap back to ImageView.
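A sketch of step 2, treating "empty" as fully transparent (the helper names are mine; adapt the emptiness test if you mean white instead):

import android.graphics.Bitmap;
import android.graphics.Color;

// Mark empty pixels that touch a non-empty neighbour. Writing into a copy
// means the changes only take effect after the iteration is finished.
static Bitmap outline(Bitmap src) {
    Bitmap out = src.copy(Bitmap.Config.ARGB_8888, true);
    for (int y = 0; y < src.getHeight(); y++) {
        for (int x = 0; x < src.getWidth(); x++) {
            if (Color.alpha(src.getPixel(x, y)) != 0) continue; // not empty
            if (hasOpaqueNeighbor(src, x, y)) out.setPixel(x, y, Color.RED);
        }
    }
    return out;
}

static boolean hasOpaqueNeighbor(Bitmap b, int x, int y) {
    for (int dy = -1; dy <= 1; dy++) {
        for (int dx = -1; dx <= 1; dx++) {
            int nx = x + dx, ny = y + dy;
            if (nx < 0 || ny < 0 || nx >= b.getWidth() || ny >= b.getHeight()) continue;
            if (Color.alpha(b.getPixel(nx, ny)) != 0) return true;
        }
    }
    return false;
}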

libGDX rendering TextureRegion renders too much

I want to develop a simple 2D side scrolling game using libGDX.
My world contains many different 64x64 pixel blocks that are drawn by a SpriteBatch using a camera to fit the screen. My 640x640px resource file contains all these images. The block textures are positioned at (0, 0), (0, 64), (64, 0), ... and so on in my resource file.
When my app launches, I load the texture and create many different TextureRegions:
texture = new Texture(Gdx.files.internal("texture.png"));
block = new TextureRegion(texture, 0, 0, 64, 64);
block.flip(false, true);
// continue with the other blocks
Now, when I render my world, everything seems fine. But some blocks (about 10% of them) are drawn as if the TextureRegion's rectangle were positioned wrong: their topmost pixel row comes out as the bottommost pixel row of the block above them in the resource texture. Most of the blocks are rendered correctly, and I checked multiple times that I entered the correct positions.
The odd thing is, that when I launch the game on my computer - instead of my android device - the textures are drawn correctly!
When searching for solutions, many people refer to the texture filter, but neither Linear nor Nearest works for me. :(
Hopefully I was able to explain the problem in an accessible way, and you have some ideas how to fix it (= how to draw only the texture region that I want to draw)!
Best regards
EDIT: The bug only appears at certain positions. When I draw two blocks with the same texture at different positions, one of them is drawn correctly and the other is not. I don't get it...
You should always leave empty space between your images when packing them into one texture, because with FILTER_LINEAR (which I think is the default) every pixel is sampled from the four nearest pixels. If your images have no padding of empty pixels, then for every edge pixel the sampler will pull in pixels from the neighbouring image.
So there are three options to solve your issue:
Manually add space between the images in your texture file
Stop using FILTER_LINEAR (but you will get ugly results if you are not drawing at the native image dimensions, e.g. when scaling the image)
Use the libGDX Texture Packer, which has built-in functionality to do just that when you pack your images (see the sketch below)
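For the last option, something like this should do it (a sketch against the gdx-tools TexturePacker API; the paths and settings here are illustrative, not from the question):

import com.badlogic.gdx.tools.texturepacker.TexturePacker;

// Sketch: pack the raw block images with a small gutter so that linear
// filtering can never sample a neighbouring block's edge pixels.
public class PackBlocks {
    public static void main(String[] args) {
        TexturePacker.Settings settings = new TexturePacker.Settings();
        settings.paddingX = 2;
        settings.paddingY = 2;
        settings.duplicatePadding = true; // repeat edge pixels into the gutter
        // input dir, output dir, pack file name -- all illustrative paths
        TexturePacker.process(settings, "raw-blocks", "assets", "blocks");
    }
}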

Java drawImage interpolation

My question involves the drawImage method in Java Graphics2D (this is for a desktop app, not Android).
My BufferedImage that I'd like to draw contains high-resolution binary data: most pixels are black, but I have some sparse green pixels (the green pixels represent data points from an incoming raw data stream). The bitmap is quite large, larger than my typical panel size; I made it large so I could zoom in and out. The problem is that when I zoom out I lose some of my green pixels. As an example, if my image is 1000 pixels and my panel is 250 pixels, only 1 out of every 4 pixels in each direction (X and Y) survives. If I use nearest-neighbor interpolation when I scale, the pixels can just disappear to black. If I use something like bilinear interpolation, my green pixel will get recolored to somewhere between black and green.
I understand all this behavior, but my question is: is there any way to get the behavior I want, which is to make sure that any non-black pixel is drawn at its full intensity? Perhaps something like a "max-hold" interpolation.
I realize I could probably do what I want by drawing shape primitive over a black background, and maybe this is what I'll have to do. But there is a reason I'm using bitmaps (has to do with the fact that I'm showing the data in a falling spectrogram-type display - and it does have a mode where all the pixels could be colored and not just black and green).
Thanks,
You could look at the implementation of drawImage and override it to get your desired behaviour; however, the core of the scaling probably uses hardware acceleration, so re-implementing it in Java would be really slow.
You could look into JOGL, but my impression is that, if your pixels are really sparse, just drawing the green pixels on a black background (or over an image) would be both easy to code and very fast.
You could even have a heuristic that switches from painting the dots to scaling the image if the number of dots becomes too high.
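If you do stay with bitmaps, a "max-hold" downscale can also be written by hand. A rough sketch (assuming integer scale factors; the method name is mine):

import java.awt.image.BufferedImage;

// Each destination pixel takes the per-channel maximum of its source block,
// so a sparse bright pixel always survives the downscale.
static BufferedImage maxHoldScale(BufferedImage src, int dstW, int dstH) {
    BufferedImage dst = new BufferedImage(dstW, dstH, BufferedImage.TYPE_INT_RGB);
    int bw = src.getWidth() / dstW, bh = src.getHeight() / dstH;
    for (int y = 0; y < dstH; y++) {
        for (int x = 0; x < dstW; x++) {
            int r = 0, g = 0, b = 0;
            for (int sy = y * bh; sy < (y + 1) * bh; sy++) {
                for (int sx = x * bw; sx < (x + 1) * bw; sx++) {
                    int rgb = src.getRGB(sx, sy);
                    r = Math.max(r, (rgb >> 16) & 0xFF);
                    g = Math.max(g, (rgb >> 8) & 0xFF);
                    b = Math.max(b, rgb & 0xFF);
                }
            }
            dst.setRGB(x, y, (r << 16) | (g << 8) | b);
        }
    }
    return dst;
}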

Crop and keep the size of an image android

This is a really hard question to explain in words (well it is for me anyway). I need to be able to take an image (bitmap) and crop the image down to a certain size in the centre of the screen but keeping the size of the image the same. Hopefully the picture below can explain what I mean:
So the image as a whole is cropped down to the square in the middle, but it is not stretched across the screen and remains in the centre; basically I am removing the pointless part of the image but keeping the co-ordinates of the pixels the same.
So let's say you have done your face detection and have found one face in your image. Your image is 320 x 240, and the face is bounded by the rectangle at location 100,40 with size 20 x 30. Now what would you like to do with that information? I'll do my best to help, but you'll probably need to clear up any poor assumptions on my part.
First, you can grab the face and store it into a new bitmap with something like Bitmap.createBitmap():
Bitmap face = Bitmap.createBitmap(largeSource, 100, 40, 20, 30);
This should be done outside of the draw loop, like in onCreate or some other initialization step.
It sounds like you've got some container (ImageView? Custom View with overridden onDraw?) which is housing your large image. And now you want to just draw the face in that container, at its original position? If you've got a custom view, that's as simple as the following in your onDraw:
canvas.drawBitmap(face, 100, 40, facePaint);
If you're using an ImageView instead, I'd suggest going to a custom-drawn view instead, since it sounds like you need some fine-grained drawing control.
Finally, if you've got a bunch of these faces, create a FaceObj POJO which just has a bitmap and x and y coordinates. As you detect faces, add them to an ArrayList, and then iterate over it in your onDraw to draw all your faces:
faces.add(new FaceObj(Bitmap.createBitmap(largeSource, 100, 40, 20, 30), 100, 40));
...
for (FaceObj f : faces)
    canvas.drawBitmap(f.bitmap, f.x, f.y, facePaint);
If I understand correctly, you don't really want to crop your image but rather "hide" any pixels around the square.
There are many ways to do this depending on what you are trying to do. For example you can fill the uninteresting part of the picture with black or make it transparent.
This way the coordinates of your "cropped" picture will remain the same on the screen.
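For example, in a custom-drawn view (a sketch; the helper name is mine):

import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Rect;
import android.graphics.Region;

// Black out everything outside "keep", leaving the kept pixels' coordinates
// untouched. On API 26+, canvas.clipOutRect(keep) replaces the Region.Op call.
static void hideOutside(Canvas canvas, Rect keep) {
    canvas.save();
    canvas.clipRect(keep, Region.Op.DIFFERENCE); // clip to the area outside the square
    canvas.drawColor(Color.BLACK); // or clear to transparent instead
    canvas.restore();
}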
If all you are interested in is the center of the image, then one really easy way to do this is to just add the android:scaleType="center" attribute to your ImageView, and set the ImageView to the crop size you want. If you're wanting it to be positionable, though, that's a different story.
Using Canvas.drawBitmap() you might be able to copy a part of the image to a different bitmap and discard the rest. With this particular version of the method, you can pass in the array of colors you get from getPixels(), together with an offset and the width and height that you want to copy. The stride parameter is important though: it needs to be set to the width of the original image, even if your final image will be smaller, because the method is pulling pixels from the original image's rows.
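A rough sketch of that idea (the helper name is mine; note the stride stays at the original image's width):

import android.graphics.Bitmap;
import android.graphics.Canvas;

// Draw just the region (rx, ry, rw, rh) of "src" at its original position.
static void drawRegion(Canvas canvas, Bitmap src, int rx, int ry, int rw, int rh) {
    int[] pixels = new int[src.getWidth() * src.getHeight()];
    src.getPixels(pixels, 0, src.getWidth(), 0, 0, src.getWidth(), src.getHeight());
    int offset = ry * src.getWidth() + rx; // index of the region's first pixel
    // stride = full image width, so each row of the region is read correctly
    canvas.drawBitmap(pixels, offset, src.getWidth(), rx, ry, rw, rh, true, null);
}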

How can I cut an image using a color pattern?

I am developing a small program which cuts images by color.
It will be easiest to explain using this example image:
And I want to create a new image just with the purple form, without the black frame.
Does anyone have any ideas? I am using Java 2D, so I think I need to create a Shape object covering the purple area of the first image.
If the image is literally like the one you show, you could:
load the image into a BufferedImage (with ImageIO.read())
create a new BufferedImage of the same size, ensuring it has an alpha layer (e.g. set its type to BufferedImage.TYPE_4BYTE_ABGR)
"manually" go through each pixel in turn in the loaded BufferedImage, getting the pixel colour with getRGB() and checking if it's black
if the colour is black, set the corresponding pixel to transparent in the new image, else to the original colour from the first image (see setRGB() method)
save the new image (with ImageIO.write())
There are fancier ways, but this simple method is nice and understandable and will work fine for images of the type you showed.
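Put together, the steps might look like this (a sketch; the file paths are illustrative, and only exactly-black pixels are made transparent):

import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class BlackToTransparent {
    public static void main(String[] args) throws Exception {
        BufferedImage in = ImageIO.read(new File("input.png"));
        BufferedImage out = new BufferedImage(
                in.getWidth(), in.getHeight(), BufferedImage.TYPE_4BYTE_ABGR);
        for (int y = 0; y < in.getHeight(); y++) {
            for (int x = 0; x < in.getWidth(); x++) {
                int rgb = in.getRGB(x, y) & 0xFFFFFF; // drop any source alpha
                // Black becomes fully transparent; everything else stays opaque.
                out.setRGB(x, y, rgb == 0 ? 0 : (0xFF000000 | rgb));
            }
        }
        ImageIO.write(out, "png", new File("output.png"));
    }
}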
You need to use a flood-fill algorithm that finds the boundaries of the purple area.
Wikipedia has a page on it with excellent pseudo code and animations.
http://en.wikipedia.org/wiki/Flood_fill
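For reference, a simple iterative version in Java could look like this (a sketch; the explicit stack avoids deep recursion on large areas):

import java.awt.Point;
import java.awt.image.BufferedImage;
import java.util.ArrayDeque;
import java.util.Deque;

// Collect every pixel 4-connected to the seed that has the seed's exact
// color; a tolerance test could replace the equality check for real photos.
static boolean[][] floodFill(BufferedImage img, int seedX, int seedY) {
    int target = img.getRGB(seedX, seedY);
    boolean[][] filled = new boolean[img.getWidth()][img.getHeight()];
    Deque<Point> stack = new ArrayDeque<>();
    stack.push(new Point(seedX, seedY));
    while (!stack.isEmpty()) {
        Point p = stack.pop();
        if (p.x < 0 || p.y < 0 || p.x >= img.getWidth() || p.y >= img.getHeight()) continue;
        if (filled[p.x][p.y] || img.getRGB(p.x, p.y) != target) continue;
        filled[p.x][p.y] = true;
        stack.push(new Point(p.x + 1, p.y));
        stack.push(new Point(p.x - 1, p.y));
        stack.push(new Point(p.x, p.y + 1));
        stack.push(new Point(p.x, p.y - 1));
    }
    return filled;
}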
