Context:
I'm trying to create an animation in Java.
The animation simply takes an image and makes it appear from the darkest pixels to the lightest.
The Problem:
The internal algorithm defining the pixel transformations is not my issue.
I'm new to Java and computing in general. I've done a bit of research and know that there are plenty of APIs that help with image filters/transformations.
My problem is performance, and understanding it.
For the implementation, I've created a method that does the following:
Receives a BufferedImage.
Gets the WritableRaster of the BufferedImage.
Using setSample and getSample, processes and changes the image pixel by pixel.
Returns the BufferedImage.
After that, I use a Timer to call the method.
The returned BufferedImage is attached to a JButton via setIcon after each call.
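Roughly, the setup looks like this (a minimal sketch; applyStep, image, and button stand in for my actual method and components):
// A Swing Timer drives the animation at a fixed interval.
javax.swing.Timer timer = new javax.swing.Timer(33, e -> {      // ~30 fps
    BufferedImage frame = applyStep(image);                     // the per-frame pixel pass
    button.setIcon(new javax.swing.ImageIcon(frame));
});
timer.start();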
With a 500x500 image, my machine takes around 3ms to process each call.
For standard 1080p images it takes around 30ms, which is about 33 frames per second.
My goal is to process/animate Full HD images at 30fps, and I won't be able to on the path I'm following. Not on most computers.
What am I getting wrong? How can I make it faster? Would using getDataBuffer or getPixels instead of getSample improve it?
Thanks in advance! And sorry for my English.
Partial Conclusions:
Thanks to some help here, I've changed the approach. Instead of using getSample and setSample, I store the pixels' ARGB data from the BufferedImage in an int array, process the array, and copy it all at once into the Raster of another BufferedImage.
The processing time dropped from 30ms (get/set sample) to 1ms (measured roughly, but on the same machine, environment and code).
Below is a little class I coded to implement it. The class keeps only the pixels below a given brightness level; the other pixels become transparent (alpha = 0).
Hope it helps whoever searches for the same solution in the future. Be aware that I'm below rookie level in Java, so the code might be poorly organized/optimized.
import java.awt.Graphics2D;
import java.awt.image.*;

/**
 * @author Psyny
 */
public class ImageAppearFX {

    // Essential data
    BufferedImage imgProcessed;
    int[] RAWoriginal;
    int[] RAWprocessed;
    WritableRaster rbgRasterProcessedW;

    // Information about the image
    int x, y;
    int[] mapBrightness;

    public ImageAppearFX(BufferedImage inputIMG) {
        // Store dimensions
        x = inputIMG.getWidth();
        y = inputIMG.getHeight();

        // Convert the input image to INT_ARGB and store it
        this.imgProcessed = new BufferedImage(x, y, BufferedImage.TYPE_INT_ARGB);
        Graphics2D canvas = this.imgProcessed.createGraphics();
        canvas.drawImage(inputIMG, 0, 0, x, y, null);
        canvas.dispose();

        // Create an int array of the pixel data.
        // Note that the image was converted to INT_ARGB.
        // The array is cloned because getData() returns the live backing array:
        // without the copy, setDataElements() below would overwrite the original pixels.
        this.RAWoriginal = ((DataBufferInt) this.imgProcessed.getRaster().getDataBuffer()).getData().clone();

        // Duplicate of the original pixel array, so changes are always based on the original image
        this.RAWprocessed = this.RAWoriginal.clone();

        // Get the raster; we will need it to write pixels to
        rbgRasterProcessedW = imgProcessed.getRaster();

        // Effect information: store the brightness of each pixel
        mapBrightness = new int[x * y];
        int r, g, b, a, greaterColor;

        // Process all pixels
        for (int i = 0; i < this.RAWoriginal.length; i++) {
            a = (this.RAWoriginal[i] >> 24) & 0xFF;
            r = (this.RAWoriginal[i] >> 16) & 0xFF;
            g = (this.RAWoriginal[i] >> 8) & 0xFF;
            b = (this.RAWoriginal[i]) & 0xFF;

            // Find the strongest color channel
            greaterColor = r;
            if (b > r) {
                if (g > b) greaterColor = g;
                else greaterColor = b;
            } else if (g > r) {
                greaterColor = g;
            }
            this.mapBrightness[i] = greaterColor;
        }
    }

    // Effect: show only pixels below a certain percentage of brightness
    public BufferedImage BrightnessLimit(float percent) {
        // Adjust input values
        percent = percent / 100;

        // Pixel variables
        int hardCap = (int) (255 * percent);
        int r, g, b, a, bright;

        // Process all pixels
        for (int i = 0; i < this.RAWoriginal.length; i++) {
            // Get the data of a pixel of the ORIGINAL image
            a = (this.RAWoriginal[i] >> 24) & 0xFF;
            r = (this.RAWoriginal[i] >> 16) & 0xFF;
            g = (this.RAWoriginal[i] >> 8) & 0xFF;
            b = (this.RAWoriginal[i]) & 0xFF;

            // Brightness of that same pixel
            bright = this.mapBrightness[i];

            // Hide pixels brighter than the cap
            if (bright > hardCap) {
                a = 0;
            }

            this.RAWprocessed[i] = (a << 24) | (r << 16) | (g << 8) | b; // Pack ARGB
        }

        // Copy the processed array into the raster of the processed image, all at once
        rbgRasterProcessedW.setDataElements(0, 0, x, y, RAWprocessed);
        return imgProcessed;
    }

    // Return a reference to the processed image
    public BufferedImage getImage() {
        return imgProcessed;
    }
}
While the time difference resulting from the change doesn't prove that the repeated searching is the bottleneck, it does strongly implicate it.
If you are willing/able to trade memory for time, I would first sort a list of all the pixel locations by brightness. Next, I would use the sorted list during the animation to look up the next pixel to copy.
An extra piece of advice: use one of Java's built-in sorting methods. It's educational to write your own, but learning how to sort doesn't seem to be your goal here. Also, if my guess about the bottleneck is wrong, you'll want to minimize the time you spend pursuing this answer.
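A minimal sketch of that idea, reusing the mapBrightness array from the class above (revealedSoFar and revealUpTo are illustrative frame counters, not code from the post):
// Sort pixel indices by brightness once, up front, using the built-in sort.
Integer[] order = new Integer[mapBrightness.length];
for (int i = 0; i < order.length; i++) order[i] = i;
java.util.Arrays.sort(order, (p, q) -> mapBrightness[p] - mapBrightness[q]);

// Each frame then reveals only the next slice of pixels,
// instead of re-testing every pixel of the image:
for (int i = revealedSoFar; i < revealUpTo; i++) {
    RAWprocessed[order[i]] = RAWoriginal[order[i]]; // restore the original ARGB value
}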
Related
I don't understand how the WritableRaster class in Java works. I tried looking at the documentation but don't understand how it takes values from an array of pixels. Also, I'm not sure what the array of pixels consists of.
Here is what I'm doing.
What I want to do is Shamir's Secret Sharing on images. For that I need to fetch an image into a BufferedImage. I take a secret image and create shares by running a 'function' on each pixel of the image (basically changing the pixel values by something).
Snippet:
int w = image.getWidth();
int h = image.getHeight();
for (int i = 0; i < h; i++)
{
    for (int j = 0; j < w; j++)
    {
        int pixel = image.getRGB(j, i);
        int red = (pixel >> 16) & 0xFF;
        int green = (pixel >> 8) & 0xFF;
        int blue = (pixel) & 0xFF;
        pixels[j][i] = share1(red, green, blue);
    }
}

// Taking those rgb values, I change them using some function and return an int value. Something like this:
public int share1(int r, int g, int b)
{
    int a1 = rand.nextInt(primeNumber);
    int total1 = r + g + b + a1;
    int new_pixel = total1 % primeNumber;
    return new_pixel;
}
This 2D array pixels now has all the new color values, right? But now I want to build an image using these new values. So what I did is:
First, I converted this pixels array to a list.
Now this list has the pixel values of the new image. But to build an image using the RasterObj.setPixels() method, I need an array with separate RGB values [I MIGHT BE WRONG HERE!]
So I take the individual values from the list, extract the RGB values, and put them consecutively into a new 1D array pixelvector, something like this: (r1,g1,b1,r2,g2,b2,r3,g3,b3...)
The size of the list is w*h, because it contains a single value per pixel.
BUT the size of the new array pixelvector will be w*h*3, since it contains the r, g, b values of each pixel.
Then to form the image I do this:
Snippet:
BufferedImage image_share1 = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
WritableRaster rast = (WritableRaster) image_share1.getData();
rast.setPixels(0, 0, w, h, pixelvector);
image_share1.setData(rast);
ImageIO.write(image_share1,"JPG",new File("share1.jpg"));
If I put an array with just single pixel values into the setPixels() method, it does not return from that function! But if I put an array with separate r, g, b values, it returns. Yet doing the same thing for share1, share2, etc., I get nothing but shades of blue. So I am not even sure I will be able to reconstruct the image.
PS - This might look like very foolish code, I know. But I had just one day to do this and learn about images in Java, so I am doing the best I can.
Thanks.
A Raster (like WritableRaster and its subclasses) consists of a SampleModel and a DataBuffer. The SampleModel describes the sample layout (is it pixel packed, pixel interleaved, band interleaved? how many bands? etc.) and dimensions, while the DataBuffer describes the actual storage (are the samples bytes, shorts, or ints, signed or unsigned? a single array or one array per band? etc.).
For BufferedImage.TYPE_INT_RGB the samples will be pixel packed (all three R, G and B samples packed into a single int for each pixel), with data/transfer type DataBuffer.TYPE_INT.
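In other words, each packed pixel can be built and taken apart with shifts and masks (a small illustrative sketch, not from the original answer):
int pixel = (r << 16) | (g << 8) | b;  // pack: 0x00rrggbb
int r2 = (pixel >> 16) & 0xFF;         // unpack red
int g2 = (pixel >> 8) & 0xFF;          // unpack green
int b2 = pixel & 0xFF;                 // unpack blue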
Sorry for not answering your question regarding WritableRaster.setPixels(...) directly, but I don't think it's the method you are looking for (in most cases, it's not). :-)
For your goal, I think what you should do is something like:
// Pixels in TYPE_INT_RGB format
// (i.e. 0xFFrrggbb, where rr is two hex digits of red, gg two of green, etc.)
int[] pixelvector = new int[w * h];
BufferedImage image_share1 = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
WritableRaster rast = image_share1.getRaster(); // Faster! No copy, and live updated
rast.setDataElements(0, 0, w, h, pixelvector);
// No need to call setData, as we modified image_share1 via its raster
ImageIO.write(image_share1,"JPG",new File("share1.jpg"));
I'm assuming the rest of your code for modifying each pixel value is correct. :-)
But just a tip: You'll make it easier for yourself (and faster due to less conversion) if you use a 1D array instead of a 2D array. I.e.:
int[] pixels = new int[w * h]; // instead of int[][] pixels = new int[w][h];
// ...
for (int y = 0; y < h; y++) {
    for (int x = 0; x < w; x++) {
        // ...
        pixels[y * w + x] = share1(red, green, blue); // instead of pixels[x][y]
    }
}
I am working on a module in which I have to make the background of a bitmap image transparent. I am making an app like "Stick it", through which we can make a sticker out of any image. I don't know where to begin.
Can someone give me a link or a hint for it?
Original Image-
After making background transparent-
This is what I want.
I can only provide some hints on how to approach your problem. You need to do image segmentation. This can be achieved via the k-means algorithm or similar clustering algorithms. See this for algorithms on image segmentation via clustering and this for a Java code example. The computation of the clustering can be very time consuming on a mobile device. Once you have the clustering, you can use this approach to distinguish between the background and the foreground. In general, all your pictures should have a background color which differs strongly from the foreground, otherwise it is not possible for the clustering to distinguish between them. It can also happen that a pixel inside your foreground is assigned to the background cluster because it has a similar color to your background. To prevent this from happening you could use this approach or a region growth algorithm. Afterwards you can let your user select the clusters via touch and remove them. I also had the same problems with my Android app. This will give you a good start, and once you have implemented the clustering you just need to tweak the k parameter to get good results.
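For a sense of what the clustering step looks like, here is a minimal k-means sketch over packed RGB pixels (entirely illustrative: fixed iteration count, fixed random seed; a production version would need better seeding and convergence checks):
import java.util.Random;

public class PixelKMeans {
    // Assigns each 0xAARRGGBB pixel to one of k color clusters.
    // Returns the cluster index per pixel.
    public static int[] cluster(int[] pixels, int k, int iterations) {
        Random rnd = new Random(42);
        float[][] centers = new float[k][3];
        for (int c = 0; c < k; c++) {               // random initial centers
            int p = pixels[rnd.nextInt(pixels.length)];
            centers[c][0] = (p >> 16) & 0xFF;
            centers[c][1] = (p >> 8) & 0xFF;
            centers[c][2] = p & 0xFF;
        }
        int[] assign = new int[pixels.length];
        for (int it = 0; it < iterations; it++) {
            float[][] sums = new float[k][3];
            int[] counts = new int[k];
            for (int i = 0; i < pixels.length; i++) {
                int r = (pixels[i] >> 16) & 0xFF;
                int g = (pixels[i] >> 8) & 0xFF;
                int b = pixels[i] & 0xFF;
                int best = 0;
                float bestDist = Float.MAX_VALUE;
                for (int c = 0; c < k; c++) {       // nearest center by squared RGB distance
                    float dr = r - centers[c][0];
                    float dg = g - centers[c][1];
                    float db = b - centers[c][2];
                    float d = dr * dr + dg * dg + db * db;
                    if (d < bestDist) { bestDist = d; best = c; }
                }
                assign[i] = best;
                sums[best][0] += r; sums[best][1] += g; sums[best][2] += b;
                counts[best]++;
            }
            for (int c = 0; c < k; c++) {           // move centers to cluster means
                if (counts[c] > 0) {
                    centers[c][0] = sums[c][0] / counts[c];
                    centers[c][1] = sums[c][1] / counts[c];
                    centers[c][2] = sums[c][2] / counts[c];
                }
            }
        }
        return assign;
    }
}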
Seems like a daunting task. If you are talking about image processing, if I understand correctly, you can try https://developers.google.com/appengine/docs/java/images/
Also, if you want to mask the entire background (I have not tried Stick it), the application needs to understand the background image map. Please provide some examples so that I can come up with a more definitive answer.
One possibility would be to utilize the floodfill operation in the OpenCV library. There are lots of examples and tutorials on how to do things similar to what you want, and OpenCV has been ported to Android. The relevant terms to Google are of course "OpenCV" and "floodfill".
For this kind of task (and app) you'll have to use OpenGL. Usually when working with OpenGL you base your fragment shader on modules you build in Matlab. Once you have the fragment shader, it's quite easy to apply it to an image. Check this guide on how to do it.
Here's a link to removing the background from an image in Matlab.
I'm not fully familiar with Matlab, or whether it can generate the GLSL code (the fragment shader) by itself. But even if it doesn't, you might want to learn GLSL yourself, because frankly, you are trying to build a graphics app, and the Android SDK is somewhat limited for image manipulation. Most importantly, without a strong hardware-acceleration engine behind it, I cannot see it running smoothly enough.
Once you have the figure image, you can apply it to a transparent background easily, like this:
Canvas canvas = new Canvas(canvasBitmap);
canvas.drawColor(Color.TRANSPARENT);
BitmapDrawable bd = (BitmapDrawable) getResources().getDrawable(R.drawable.loading);
Bitmap yourBitmap = bd.getBitmap();
Paint paint = new Paint();
canvas.drawBitmap(yourBitmap, 0, 0, paint);
Bitmap newBitmap = Bitmap.createBitmap(image.getWidth(), image.getHeight(),image.getConfig());
Canvas canvas = new Canvas(newBitmap);
canvas.drawColor(Color.TRANSPARENT);
canvas.drawBitmap(image, 0, 0, null);
OR
See this
Hope this helps you.
If you are working on Android you might need a Buffer to get the pixels from the image. It's an IntBuffer, and it reduces memory usage enormously. To get data from and store data into the Buffer you have three methods (you can skip this part if you don't have 'large' images):
private IntBuffer buffer;

public void copyImageIntoBuffer(File imgSource) {
    final Bitmap temp = BitmapFactory.decodeFile(imgSource.getAbsolutePath());
    buffer.rewind();
    temp.copyPixelsToBuffer(buffer);
}

protected void copyBufferIntoImage(File tempFile) throws IOException {
    buffer.rewind();
    Bitmap temp = Bitmap.createBitmap(imgWidth, imgHeight, Config.ARGB_8888);
    temp.copyPixelsFromBuffer(buffer);
    FileOutputStream out = new FileOutputStream(tempFile);
    temp.compress(Bitmap.CompressFormat.JPEG, 90, out);
    out.flush();
    out.close();
}

public void mapBuffer(final File tempFile, long size) throws IOException {
    RandomAccessFile aFile = new RandomAccessFile(tempFile, "rw");
    aFile.setLength(4 * size); // 4 bytes per int
    FileChannel fc = aFile.getChannel();
    buffer = fc.map(FileChannel.MapMode.READ_WRITE, 0, fc.size()).asIntBuffer();
}
Now you can use the Buffer to get the pixels and modify them as desired. (I've copied a code snippet that used a progress bar on my UI and therefore needs a Handler/ProgressBar. When I did this I was working on bigger images and implemented image filters (Gauss filter, grey filter, etc.); just delete what is not needed.)
public void run(final ProgressBar bar, IntBuffer buffer, Handler mHandler,
        int imgWidth, int imgHeight, int transparentColor) {
    for (int dy = 0; dy < imgHeight; dy++) {
        final int progress = (dy * 100) / imgHeight;
        for (int dx = 0; dx < imgWidth; dx++) {
            int px = buffer.get();
            //int a = (0xFF000000 & px);
            //int r = (0x00FF0000 & px) >> 16;
            //int g = (0x0000FF00 & px) >> 8;
            //int b = (0x000000FF & px);

            // Clear the alpha byte to make this pixel transparent
            if (px == transparentColor) {
                px = px & 0x00FFFFFF;
            }

            //r = mid << 16;
            //g = mid << 8;
            //b = mid;
            //int col = a | r | g | b;
            int pos = buffer.position();
            buffer.put(pos - 1, px);
        }
        // Update the progress bar
        mHandler.post(new Runnable() {
            public void run() {
                bar.setProgress(progress);
            }
        });
    }
}
If you really have small images, you can get the pixels directly during onCreate(), or, even better, create a buffer (maybe a HashMap or a List) before you start the Activity.
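For the small-image case, a minimal sketch of reading the pixels directly (standard Android Bitmap API; the bitmap variable is a placeholder):
// Copy all pixels of a small bitmap into an int array (0xAARRGGBB each).
int w = bitmap.getWidth();
int h = bitmap.getHeight();
int[] pixels = new int[w * h];
bitmap.getPixels(pixels, 0, w, 0, 0, w, h);
// ... modify pixels ...
bitmap.setPixels(pixels, 0, w, 0, 0, w, h); // write them back (bitmap must be mutable)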
I know this is an expensive operation, and I already tried to use the robot.getPixelColor() function, but it works slowly; it can only sample about 5 times per second.
What I'm saying is that 5 times is too few for what I actually want to do, but 20 would be enough. So I'm asking if you can suggest some optimisations to my actual code in order to get this result.
My code is:
while (true) {
    color = robot.getPixelColor(x, y);
    red = color.getRed();
    green = color.getGreen();
    blue = color.getBlue();
    // do a few other operations in constant time
}
I don't know if this helps, but x and y don't change inside the while loop, so it's always the same pixel coordinates.
Thanks in advance!
EDIT: The pixel color will be taken from a game which will run at the same time as the Java program, so it will keep changing. The only thing is that the coordinates are always the same.
I'm assuming the color is represented as a 32-bit int encoded as ARGB. In that case, instead of calling a getter method per channel, you can do the bit masking yourself to extract the colors, which may end up being faster because you avoid the overhead of the extra method calls. I'd recommend doing something like this:
int color = robot.getPixelColor(x, y).getRGB(); // getPixelColor returns a Color; getRGB() gives the packed ARGB int
int redBitMask   = 0x00FF0000;
int greenBitMask = 0x0000FF00;
int blueBitMask  = 0x000000FF;
int extractedRed   = (color & redBitMask) >> 16;
int extractedGreen = (color & greenBitMask) >> 8;
int extractedBlue  = (color & blueBitMask);
Bit shifting and bitwise operations tend to be very fast.
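If that still isn't fast enough, another option (my own suggestion, not part of the original answer) is to capture the 1x1 region with Robot.createScreenCapture and read the packed int directly; whether this is faster than getPixelColor varies by platform, so measure both:
import java.awt.Rectangle;
import java.awt.image.BufferedImage;

// Grab just the 1x1 region at (x, y) and read its packed ARGB value.
BufferedImage cap = robot.createScreenCapture(new Rectangle(x, y, 1, 1));
int argb = cap.getRGB(0, 0);
int r = (argb >> 16) & 0xFF;
int g = (argb >> 8) & 0xFF;
int b = argb & 0xFF;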
I have written code to find the brightest pixel (laser dot) in the camera viewfinder and draw a circle at that coordinate.
Ideally the circle should be on the dot, but due to some problem (maybe screen resolution / a coding error) the circle is a little displaced.
I am attaching the screenshots and code.
I will be highly grateful if you can pinpoint the error or give your valuable suggestions.
Problems:
The dot is tracked properly but the coordinates aren't exact (as seen in the screenshots).
The Fireworks mode doesn't work on the S2 but works on the Galaxy Ace.
The app crashes on a Motorola Android phone.
Code+Screenshot
http://wikisend.com/download/553910/re41postqueryregardingdecodeyuv420spmrgbdatamyuvd.zip
protected void onDraw(Canvas canvas) {
    if (mBitmap != null)
    {
        int canvasWidth = canvas.getWidth();
        int canvasHeight = canvas.getHeight();
        int newImageWidth = canvasWidth;
        int marginWidth = (canvasWidth - newImageWidth) / 2;

        // Convert from YUV to RGB
        decodeYUV420SP(mRGBData, mYUVData, mImageWidth, mImageHeight);

        int maxR = 255; x = 0; y = 0; int k = 0;
        for (int i = 0; i < mRGBData.length; i++) {
            if ((((mRGBData[i] >> 16) & 0x000000FF)
                    + ((mRGBData[i] >> 8) & 0x000000FF)
                    + ((mRGBData[i]) & 0x000000FF)) > maxR)
            {
                maxR = (mRGBData[i] >> 16) & 0x000000FF;
                maxR += (mRGBData[i] >> 8) & 0x000000FF;
                maxR += (mRGBData[i]) & 0x000000FF;
                y = i % mImageWidth;   // column of the brightest pixel so far
                x = i / (mImageWidth); // row of the brightest pixel so far
            }
        }
        String status = "Laser coords: (" + maxR + ", " + y + ")";
        canvas.drawText(status, marginWidth + 10, 60, mPaintYellow);
        canvas.drawCircle(y, x, 10, mPaintYellow);
    }
    super.onDraw(canvas);
}
Just taking the brightest pixel you find will most definitely not help you, judging from your screenshot. The laser dot there is probably 15x15 pixels large, with most of the area being oversaturated, i.e. all of those pixels are maxed out and will be "the brightest pixel".
A better heuristic would probably be to take the coordinates of all pixels whose brightness value (assuming you can use the HSL color model) is above a given threshold, and then calculate some form of weighted average (with the weight of each pixel being relative to its brightness). For testing purposes, just calculating the plain average would probably do.
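A minimal sketch of that heuristic, using the simple R+G+B sum as the brightness measure to match the code above (the threshold value is an arbitrary placeholder to tune):
// Brightness-weighted centroid of all pixels above a threshold.
int threshold = 600;            // out of a max of 3 * 255 = 765; tune as needed
long sumX = 0, sumY = 0, sumW = 0;
for (int i = 0; i < mRGBData.length; i++) {
    int bright = ((mRGBData[i] >> 16) & 0xFF)
               + ((mRGBData[i] >> 8) & 0xFF)
               + (mRGBData[i] & 0xFF);
    if (bright > threshold) {
        int col = i % mImageWidth;
        int row = i / mImageWidth;
        sumX += (long) col * bright;   // weight each coordinate by brightness
        sumY += (long) row * bright;
        sumW += bright;
    }
}
int centerX = sumW > 0 ? (int) (sumX / sumW) : 0;
int centerY = sumW > 0 ? (int) (sumY / sumW) : 0;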
I want to do a simple color-to-grayscale conversion using java.awt.image.BufferedImage. I'm a beginner in the field of image processing, so please forgive me if I've confused something.
My input image is an RGB 24-bit image (no alpha), and I'd like to obtain an 8-bit grayscale BufferedImage on the output, which means I have a class like this (details omitted for clarity):
public class GrayscaleFilter {
    private BufferedImage colorFrame;
    private BufferedImage grayFrame =
        new BufferedImage(width, height, BufferedImage.TYPE_BYTE_GRAY);
I've successfully tried out two conversion methods so far, the first being:
private BufferedImageOp grayscaleConv =
    new ColorConvertOp(ColorSpace.getInstance(ColorSpace.CS_GRAY), null);

protected void filter() {
    grayscaleConv.filter(colorFrame, grayFrame);
}
And the second being:
protected void filter() {
    WritableRaster raster = grayFrame.getRaster();
    for (int x = 0; x < raster.getWidth(); x++) {
        for (int y = 0; y < raster.getHeight(); y++) {
            int argb = colorFrame.getRGB(x, y);
            int r = (argb >> 16) & 0xff;
            int g = (argb >> 8) & 0xff;
            int b = (argb) & 0xff;
            int l = (int) (.299 * r + .587 * g + .114 * b);
            raster.setSample(x, y, 0, l);
        }
    }
}
The first method works much faster, but the image produced is very dark, which means I'm losing dynamic range, and that is unacceptable. (As far as I can tell, there is some color conversion mapping between the grayscale and sRGB ColorModels, called tosRGB8LUT, which doesn't work well for me, but I'm not sure; I just suppose those values are used.) The second method works slower, but the effect is very nice.
Is there a way of combining those two, e.g. using a custom indexed ColorSpace for ColorConvertOp? If yes, could you please give me an example?
Thanks in advance.
public BufferedImage getGrayScale(BufferedImage inputImage) {
    BufferedImage img = new BufferedImage(inputImage.getWidth(), inputImage.getHeight(), BufferedImage.TYPE_BYTE_GRAY);
    Graphics g = img.getGraphics();
    g.drawImage(inputImage, 0, 0, null);
    g.dispose();
    return img;
}
There's an example here which differs from your first example in one small aspect: the parameters to ColorConvertOp. Try this:
protected void filter() {
    BufferedImageOp grayscaleConv =
        new ColorConvertOp(colorFrame.getColorModel().getColorSpace(),
                           grayFrame.getColorModel().getColorSpace(), null);
    grayscaleConv.filter(colorFrame, grayFrame);
}
Try modifying your second approach: instead of working on a single pixel at a time, retrieve an array of ARGB int values, convert those, and set them back in one call, as in the sketch below.
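A minimal sketch of that change, assuming the colorFrame/grayFrame fields from the question (this is my illustration, not the original answerer's code):
protected void filter() {
    int w = colorFrame.getWidth();
    int h = colorFrame.getHeight();
    // One bulk read instead of w*h individual getRGB(x, y) calls
    int[] argb = colorFrame.getRGB(0, 0, w, h, null, 0, w);
    int[] gray = new int[w * h];
    for (int i = 0; i < argb.length; i++) {
        int r = (argb[i] >> 16) & 0xff;
        int g = (argb[i] >> 8) & 0xff;
        int b = argb[i] & 0xff;
        gray[i] = (int) (.299 * r + .587 * g + .114 * b);
    }
    // One bulk write of the single gray band
    grayFrame.getRaster().setSamples(0, 0, w, h, 0, gray);
}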
The second method is based on the pixel's luminance and therefore obtains more favorable visual results. It could be sped up a little by replacing the expensive floating-point arithmetic used to calculate l with precomputed lookup arrays, as sketched below.
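For example (a minimal sketch of that lookup-table idea, using fixed-point weights; the scaling factor is my choice):
// Precompute the weighted contribution of every possible channel value once.
int[] lutR = new int[256], lutG = new int[256], lutB = new int[256];
for (int v = 0; v < 256; v++) {
    lutR[v] = (int) (.299 * v * 1024); // fixed-point, scaled by 1024
    lutG[v] = (int) (.587 * v * 1024);
    lutB[v] = (int) (.114 * v * 1024);
}
// Per pixel: three lookups, two adds, one shift; no floating point.
int l = (lutR[r] + lutG[g] + lutB[b]) >> 10;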
Here is a solution that has worked for me in some situations.
Take the image height y, the image width x, the image color depth m, and the integer bit size n. This only works if 2^n / (x*y*2^m) >= 1, i.e. if the channel totals cannot overflow an n-bit integer.
Keep an n-bit integer total for each color channel as you process the initial grayscale values. Divide each total by x*y to get the average value avr[channel] of each channel. Then add (192 - avr[channel]) to each pixel in each channel.
Keep in mind that this approach probably won't have the same level of quality as standard luminance approaches, but if you're looking for a compromise between speed and quality, and don't want to deal with expensive floating point operations, it may work for you.
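As I read it, a minimal sketch of that idea looks like this (the 192 target level comes from the description above; gray, width, and height are illustrative names):
// Shift the gray values so that their average lands at the target level (192).
long total = 0;                          // wide accumulator, cannot overflow
for (int v : gray) total += v;           // gray: one 0..255 value per pixel
int offset = 192 - (int) (total / (width * height));
for (int i = 0; i < gray.length; i++) {
    gray[i] = Math.min(255, Math.max(0, gray[i] + offset)); // clamp to 0..255
}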