Here is sample Java code which pixelates a selected area of an image:
pixelate sample in java
I want to achieve similar functionality in Android.
I searched a lot but didn't find any example.
Any help will be appreciated.
1. Try searching for resize/re-sampling algorithms,
like bilinear filtering ...
2. Averaging the color will do it too:
make nested for loops x0, y0 through the pixelated image (small resolution)
compute the average color of the corresponding pixel area in the source image (big resolution)
this is also done by 2 nested for loops x1, y1 through the pixel area
just sum all R, G, B values separately
then divide them by the number of pixels
store the resulting color in the pixelated image at pixel (x0, y0)
[Notes]
If your pixelated image has the same resolution as the source (i.e. its "pixels" are areas of pixels),
then instead of a single pixelated pixel, fill the entire pixel area with the average color.
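Putting those steps together, a minimal Java sketch might look like the following (a hypothetical `Pixelate` helper, not taken from the linked sample; it assumes a `TYPE_INT_RGB` image with no alpha):

```java
import java.awt.image.BufferedImage;

public class Pixelate {
    // Pixelates the source image: for each blockSize x blockSize area,
    // compute the average R, G, B and fill the whole area with that color.
    public static BufferedImage pixelate(BufferedImage src, int blockSize) {
        BufferedImage out = new BufferedImage(
                src.getWidth(), src.getHeight(), BufferedImage.TYPE_INT_RGB);
        for (int y0 = 0; y0 < src.getHeight(); y0 += blockSize) {
            for (int x0 = 0; x0 < src.getWidth(); x0 += blockSize) {
                int yEnd = Math.min(y0 + blockSize, src.getHeight());
                int xEnd = Math.min(x0 + blockSize, src.getWidth());
                long r = 0, g = 0, b = 0;
                int count = 0;
                // sum all R, G, B values separately over the block
                for (int y1 = y0; y1 < yEnd; y1++) {
                    for (int x1 = x0; x1 < xEnd; x1++) {
                        int rgb = src.getRGB(x1, y1);
                        r += (rgb >> 16) & 0xFF;
                        g += (rgb >> 8) & 0xFF;
                        b += rgb & 0xFF;
                        count++;
                    }
                }
                // divide by the number of pixels, then fill the block
                int avg = (int) (r / count) << 16
                        | (int) (g / count) << 8
                        | (int) (b / count);
                for (int y1 = y0; y1 < yEnd; y1++) {
                    for (int x1 = x0; x1 < xEnd; x1++) {
                        out.setRGB(x1, y1, avg);
                    }
                }
            }
        }
        return out;
    }
}
```

On Android the same loops work against `android.graphics.Bitmap` with `getPixel`/`setPixel` in place of `getRGB`/`setRGB`.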
I was wondering whether or not you could apply a tint to a colored image to make a black and white version of that image. For example, having a landscape, applying a tint, and the new photo being only black and white. I'm dealing with an image that I cannot directly change the RGB pixel values for, which is why my only option is to overlay something on top of it. I attempted some Google searches, but to no avail, and unfortunately, I'm not an expert in color theory.
In essence, I'm looking for something that would accomplish this for any color:
COLOR + MAGIC VALUE = GRAY-SCALED (desaturated) COLOR
To be clear, 'magic value' is the color that I hope exists. Any help would be much appreciated!
TL;DR
There is no magic colour value that will do this for you.
The long version
Let's assume that the grayscale value for a pixel is equivalent to the average of its R, G and B values.
If we represent the colour of a pixel as {R,G,B} and let's say you have a 2x2 image:
{10,20,30} {30,50,70}
{90,70,80} {90,130,110}
... and you want to convert this to:
{20,20,20} {50,50,50}
{80,80,80} {110,110,110}
The magic values would need to be:
{10,0,-10} {20,0,-20}
{-10,10,0} {20,-20,0}
As shown, this magic colour value has to be different for each pixel (but can be calculated from the original image).
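To make that concrete, here is a small sketch (a hypothetical `magicValue` helper) that computes the per-pixel offset from the averaging rule above, showing it depends on the pixel:

```java
public class GrayOffset {
    // For a pixel {r,g,b}, the grayscale value is the average of the
    // three channels, and the "magic value" needed to reach it is the
    // per-channel difference (gray - channel).
    public static int[] magicValue(int r, int g, int b) {
        int gray = (r + g + b) / 3;
        return new int[] { gray - r, gray - g, gray - b };
    }
}
```

Running it over the 2x2 example reproduces a different offset triple for each pixel, which is exactly why no single overlay color can desaturate an arbitrary image.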
I am currently working on a DICOM project in Java. I am calculating DICOM values such as depth, off-axis ratio, width and height. I want to create a sample image with the calculated width and height. It should be a grayscale image, with varying densities of gray in the appropriate areas. I have created a sample image with ImageMagick, using concentric circles, but I am not getting the exact image: it does not look like an actual DICOM image, and the variation of gray does not have a linear behavior. A sample image is attached. Please suggest any other method to create a DICOM-like image. The density values are available in a list; depending on the distance from the center, the gray color changes according to the density value provided.
Pixel Data or image in DICOM data set is always rectangular and image is described using Rows (0028, 0010) and Columns (0028, 0011) attributes. So the best way to implement this is to draw your circular image on a rectangular background. Fill the background with a color that is not present in your image. Use the Pixel Padding Value (0028,0120) to specify the color used to pad grayscale images (those with a Photometric Interpretation of MONOCHROME1 or MONOCHROME2) to rectangular format.
If you do not have any color to spare (within the bits stored), you can add an Overlay Plane or Display Shutter to the data set to mask the area that is not part of the image.
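As a sketch of the rendering step (the DICOM attributes themselves are set separately with your DICOM toolkit), the following hypothetical helper draws a circular grayscale image on a rectangular background, mapping distance from the center to a gray value taken from the density list and padding the outside with a fixed value, as Pixel Padding Value would describe:

```java
import java.awt.image.BufferedImage;
import java.util.List;

public class RadialImage {
    // Draws a circular grayscale image on a rectangular background.
    // densities holds one gray value (0-255) per radial step outward from
    // the center; padValue is the background gray outside the circle.
    public static BufferedImage render(int rows, int cols,
                                       List<Integer> densities, int padValue) {
        BufferedImage img = new BufferedImage(cols, rows,
                BufferedImage.TYPE_BYTE_GRAY);
        double cx = cols / 2.0, cy = rows / 2.0;
        double maxR = Math.min(cx, cy);
        for (int y = 0; y < rows; y++) {
            for (int x = 0; x < cols; x++) {
                double d = Math.hypot(x - cx, y - cy);
                int gray;
                if (d >= maxR) {
                    gray = padValue; // pad area outside the circle
                } else {
                    // map distance from center to an index in the density list
                    int idx = (int) (d / maxR * densities.size());
                    gray = densities.get(Math.min(idx, densities.size() - 1));
                }
                img.getRaster().setSample(x, y, 0, gray);
            }
        }
        return img;
    }
}
```

The raw gray bytes of the resulting raster can then be written into the Pixel Data element of a MONOCHROME2 data set.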
My question involves the drawImage method in Java Graphics2D (this is for a desktop app, not Android).
My BufferedImage that I'd like to draw contains high resolution binary data; most pixels are black but I have some sparse green pixels (the green pixels represent data points from an incoming raw data stream). The bitmap is quite large, larger than my typical panel size. I made it large so I could zoom in and out. The problem is that when I zoom out I lose some of my green pixels. As an example, if my image is 1000 pixels and my panel is 250 pixels, I'd lose 3 out of 4 pixels in each direction (X and Y). If I use nearest neighbor interpolation when I scale, the pixels can simply disappear to black. If I use something like bilinear interpolation, my green pixel will get recolored to somewhere between black and green.
I understand all this behavior, but my question is: is there any way to get the behavior I want, which is to make sure that any non-black pixel is drawn at its full intensity? Perhaps something like a "max-hold" interpolation.
I realize I could probably do what I want by drawing shape primitives over a black background, and maybe this is what I'll have to do. But there is a reason I'm using bitmaps (it has to do with the fact that I'm showing the data in a falling spectrogram-type display, and it does have a mode where all the pixels could be colored, not just black and green).
Thanks,
You could look at the implementation of drawImage and override it to get your desired behaviour; however, the core of the scaling probably uses hardware acceleration, so re-implementing it in Java would be really slow.
You could look into JOGL, but my impression is that, if your pixels are really sparse, just drawing the green pixels on a black background (or over an image) would be both easy to code and very fast.
You could even have a heuristic switching between painting the dots and scaling the image if the number of dots starts getting too high.
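If a software path is acceptable, the "max-hold" idea from the question can be written directly: for each destination pixel, take the per-channel maximum over its source block, so a single bright pixel always survives the downscale. A minimal sketch, assuming `TYPE_INT_RGB` images (this is a hypothetical helper, not a drop-in replacement for drawImage):

```java
import java.awt.image.BufferedImage;

public class MaxHoldScale {
    // "Max-hold" downscale: each destination pixel takes the per-channel
    // maximum over the source pixels that map onto it, so sparse bright
    // pixels are never lost or dimmed by interpolation.
    public static BufferedImage downscale(BufferedImage src, int dstW, int dstH) {
        BufferedImage dst = new BufferedImage(dstW, dstH,
                BufferedImage.TYPE_INT_RGB);
        for (int dy = 0; dy < dstH; dy++) {
            for (int dx = 0; dx < dstW; dx++) {
                // source block covered by this destination pixel
                int x0 = dx * src.getWidth() / dstW;
                int x1 = Math.max(x0 + 1, (dx + 1) * src.getWidth() / dstW);
                int y0 = dy * src.getHeight() / dstH;
                int y1 = Math.max(y0 + 1, (dy + 1) * src.getHeight() / dstH);
                int r = 0, g = 0, b = 0;
                for (int y = y0; y < y1; y++) {
                    for (int x = x0; x < x1; x++) {
                        int rgb = src.getRGB(x, y);
                        r = Math.max(r, (rgb >> 16) & 0xFF);
                        g = Math.max(g, (rgb >> 8) & 0xFF);
                        b = Math.max(b, rgb & 0xFF);
                    }
                }
                dst.setRGB(dx, dy, (r << 16) | (g << 8) | b);
            }
        }
        return dst;
    }
}
```

This is O(source pixels) per redraw, so it is only practical if you cache the downscaled image and regenerate it when the zoom level changes.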
This is what I need to do: with the camera, take a picture of the body's forearm, get the average color of that picture, and then compare it with the available skin levels (from white to black) to see what the skin color is (bright, dark, ...) using Java. I'm stuck on getting the average color of a picture; is there any other way to compare the colors of two pictures?
Would anyone help me out with this, please?
Thank you
I'm assuming you know from grade school how to take the average of a set of numbers.
A color is represented by three numbers: the RGB (red, green, blue) values. To find the average of a set of colors, just find the average of their respective red, green, and blue components.
I have had to do something similar in the past. The most efficient way I found was to pass the work to the GPU, which is made for this sort of stuff. I took the image and scaled it down to a 1x1 image. The GPU averages the pixel colors as it shrinks the image, so the pixel color of the 1x1 image will be the whole image's average pixel color. Do this for both pictures and compare the results.
Here's a good start for color averaging:
http://www.compuphase.com/graphic/scale3.htm
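The CPU-side version of the averaging described above is short enough to show directly; this is a minimal sketch (a hypothetical `AverageColor` helper) that averages each channel over the whole image:

```java
import java.awt.Color;
import java.awt.image.BufferedImage;

public class AverageColor {
    // Averages the R, G and B channels separately over every pixel,
    // returning the image's mean color.
    public static Color average(BufferedImage img) {
        long r = 0, g = 0, b = 0;
        long n = (long) img.getWidth() * img.getHeight();
        for (int y = 0; y < img.getHeight(); y++) {
            for (int x = 0; x < img.getWidth(); x++) {
                int rgb = img.getRGB(x, y);
                r += (rgb >> 16) & 0xFF;
                g += (rgb >> 8) & 0xFF;
                b += rgb & 0xFF;
            }
        }
        return new Color((int) (r / n), (int) (g / n), (int) (b / n));
    }
}
```

To compare two pictures you can then compute the distance between their two average colors, e.g. the Euclidean distance in RGB space, and threshold it.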
I'm trying to write a graphical effect where a circle moves around an image smudging the image as it goes (like the way the smudge tool in Gimp or Photoshop would work). The basic algorithm I'm using is:
the circle moves from position A to position B on the bitmap
copy a circle of pixels from position A into a temporary bitmap
draw this circle of pixels from the temporary bitmap to position B using alpha of about 50%.
This works fine and looks like what I would expect, where the image will look like it's getting smudged if the circle moves 1 pixel at a time over the image.
I now want to add some texture to the smudge effect. I have a bitmap that contains a picture of a paint blob. The algorithm from the above is modified to the following so the smudge takes the shape of this paint blob:
as before
replace the temporary bitmap's pixels with the paint blob texture, then copy the circle of pixels from position A into the temporary bitmap, but only keep the pixels that match up against paint blob pixels (i.e. use Porter-Duff "source in destination" mode when drawing the circle into the temporary bitmap).
as before
This almost works and it looks fine initially, but gradually the smudging makes the colors in my image darker! If the circle passes over the same area several times, the colors eventually change to black. Any ideas what I could be doing wrong?
I've implemented the above in Android. I happened upon this post about bitmaps in Android (like my paint blob texture) being loaded with "premultiplied alpha", where the author says it caused his images to become darker because of it:
http://www.kittehface.com/2010/06/androidbitmap-and-premultiplied-alpha.html
I suspect I'm suffering from a similar problem but I don't understand what's going on well enough and don't know how to fix it. Does anyone have hints at what might be going on?
Well, at first glance, the reason the image is getting darker is step 3 of the first algorithm: you are overlaying a pixel over an existing pixel at 50% alpha. You might want to consider using the mean of the original pixel value and the new pixel value instead. You might also want to research some blurring algorithms.
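The mean-of-two-pixels suggestion can be sketched channel-wise; this hypothetical helper works on packed RGB ints and deliberately ignores alpha, sidestepping any premultiplied-alpha effects:

```java
public class Blend {
    // Channel-wise mean of two packed 0xRRGGBB colors: the simple
    // alternative to 50% alpha compositing suggested above.
    public static int mean(int rgb1, int rgb2) {
        int r = (((rgb1 >> 16) & 0xFF) + ((rgb2 >> 16) & 0xFF)) / 2;
        int g = (((rgb1 >> 8) & 0xFF) + ((rgb2 >> 8) & 0xFF)) / 2;
        int b = ((rgb1 & 0xFF) + (rgb2 & 0xFF)) / 2;
        return (r << 16) | (g << 8) | b;
    }
}
```

Because the blend is computed on straight (non-premultiplied) channel values, repeatedly blending a color with itself leaves it unchanged rather than drifting toward black.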