I have a 2D array which contains values from 0-1.0, and my goal is to create an image where each element corresponds to a colour between white and black. The higher the value, the whiter the pixel, and vice versa.
So far I've come up with the following Java code:
BufferedImage img = new BufferedImage(fg.getLengthInFrames(), 32, BufferedImage.TYPE_BYTE_GRAY);
for (int i = 0; i < 32; i++) {
    for (int j = 0; j < fg.getLengthInFrames(); j++) {
        img.setRGB(j, i, 0); // wrong
    }
}
Obviously this won't work as it is just a dummy function, but say I had an array element, arr[0][5] = 0.85. How would I be able to convert this into an RGB value that is equal to the corresponding colour value?
Use the TYPE_INT_ARGB image type (a color image supports gray scale with no problem). Use
colSTI.setRGB(j, i, new Color(gray, gray, gray).getRGB());
to set the integer color value in the BufferedImage, where gray is your shade of gray in the range 0..255. If the image is huge, you can implement some kind of gray-to-color caching to avoid creating and garbage collecting lots of Color objects.
This approach supports only 256 levels of gray. You may need a more complex approach if more levels are required. If you are representing some measured physical value in your image, I would propose using color as well to represent the different levels. Such false-color images are common in science because they make it easier to distinguish more levels.
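As a minimal sketch of the whole conversion (the array layout and the rounding rule are my assumptions, and writing the sample directly avoids the sRGB conversion setRGB would otherwise apply to a gray image):

```java
import java.awt.image.BufferedImage;
import java.awt.image.WritableRaster;

public class GrayFromArray {
    // Build a TYPE_BYTE_GRAY image from a [row][column] array of 0..1 values.
    static BufferedImage toImage(double[][] arr) {
        int h = arr.length, w = arr[0].length;
        BufferedImage img = new BufferedImage(w, h, BufferedImage.TYPE_BYTE_GRAY);
        WritableRaster raster = img.getRaster();
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                // 0.0 -> 0 (black), 1.0 -> 255 (white)
                int gray = (int) Math.round(arr[y][x] * 255);
                raster.setSample(x, y, 0, gray);
            }
        }
        return img;
    }
}
```

With this mapping, an element such as arr[0][5] = 0.85 becomes gray level 217 (0.85 × 255, rounded).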
I'm trying to understand the algorithm behind the super fast blur. Below is the Java port that works on Android, which I'm using as a test. It looks like this version makes some optimisations that I don't quite understand, and there aren't any comments either.
void fastblur(Bitmap img, int radius){
    if (radius < 1){
        return;
    }
    int w = img.getWidth();
    int h = img.getHeight();
    int wm = w - 1;
    int hm = h - 1;
    int wh = w * h;
    int div = radius + radius + 1;
    int r[] = new int[wh];
    int g[] = new int[wh];
    int b[] = new int[wh];
    int rsum, gsum, bsum, x, y, i, p, p1, p2, yp, yi, yw;
    int vmin[] = new int[Math.max(w, h)];
    int vmax[] = new int[Math.max(w, h)];
    int[] pix = new int[w * h];
    img.getPixels(pix, 0, w, 0, 0, w, h);
    int dv[] = new int[256 * div];
    for (i = 0; i < 256 * div; i++){
        dv[i] = (i / div);
    }
    yw = yi = 0;
    for (y = 0; y < h; y++){
        rsum = gsum = bsum = 0;
        for (i = -radius; i <= radius; i++){
            p = pix[yi + Math.min(wm, Math.max(i, 0))];
            rsum += (p & 0xff0000) >> 16;
            gsum += (p & 0x00ff00) >> 8;
            bsum += p & 0x0000ff;
        }
        for (x = 0; x < w; x++){
            r[yi] = dv[rsum];
            g[yi] = dv[gsum];
            b[yi] = dv[bsum];
            if (y == 0){
                vmin[x] = Math.min(x + radius + 1, wm);
                vmax[x] = Math.max(x - radius, 0);
            }
            p1 = pix[yw + vmin[x]];
            p2 = pix[yw + vmax[x]];
            rsum += ((p1 & 0xff0000) - (p2 & 0xff0000)) >> 16;
            gsum += ((p1 & 0x00ff00) - (p2 & 0x00ff00)) >> 8;
            bsum += (p1 & 0x0000ff) - (p2 & 0x0000ff);
            yi++;
        }
        yw += w;
    }
    for (x = 0; x < w; x++){
        rsum = gsum = bsum = 0;
        yp = -radius * w;
        for (i = -radius; i <= radius; i++){
            yi = Math.max(0, yp) + x;
            rsum += r[yi];
            gsum += g[yi];
            bsum += b[yi];
            yp += w;
        }
        yi = x;
        for (y = 0; y < h; y++){
            pix[yi] = 0xff000000 | (dv[rsum] << 16) | (dv[gsum] << 8) | dv[bsum];
            if (x == 0){
                vmin[y] = Math.min(y + radius + 1, hm) * w;
                vmax[y] = Math.max(y - radius, 0) * w;
            }
            p1 = x + vmin[y];
            p2 = x + vmax[y];
            rsum += r[p1] - r[p2];
            gsum += g[p1] - g[p2];
            bsum += b[p1] - b[p2];
            yi += w;
        }
    }
    img.setPixels(pix, 0, w, 0, 0, w, h);
}
Correct me if I'm wrong about my speculations:
What does the below loop do? Is it associated with pre-computing the kernel table? What about div, is that the kernel table size? I guess what I'm trying to ask is, what is dv[] supposed to store?
int dv[] = new int[256 * div];
for (i = 0; i < 256 * div; i++){
    dv[i] = (i / div);
}
Looking at the horizontal pass:
The loop below looks like it's summing up the separate RGB values, but it only does this once per row, at the starting pixel, since yi is only incremented in the per-pixel loop that follows. Is this because we keep adding to the RGB sums as we process the pixels in that next loop?
for (i = -radius; i <= radius; i++){
    int ind = yi + Math.min(wm, Math.max(i, 0));
    p = pix[ind];
    rsum += (p & 0xff0000) >> 16;
    gsum += (p & 0x00ff00) >> 8;
    bsum += p & 0x0000ff;
}
Are we only selecting the leftmost and rightmost pixels according to the radius and the current pixel position?
if (y == 0){
    vmin[x] = Math.min(x + radius + 1, wm);
    vmax[x] = Math.max(x - radius, 0);
}
p1 = pix[yw + vmin[x]];
p2 = pix[yw + vmax[x]];
Next is what is confusing me the most:
Am I correct to say that we're getting the difference between the right and left pixels and adding that to the running RGB totals we have?
rsum+=((p1 & 0xff0000)-(p2 & 0xff0000))>>16;
gsum+=((p1 & 0x00ff00)-(p2 & 0x00ff00))>>8;
bsum+= (p1 & 0x0000ff)-(p2 & 0x0000ff);
I haven't had a look at the second pass since this is pretty much going over my head. Any clarification would be appreciated, and any commentary on the loop in the vertical pass would be helpful as well. Thanks.
Since I wrote that one I guess I can explain best :-)
int dv[] = new int[256 * div];
for (i = 0; i < 256 * div; i++){
    dv[i] = (i / div);
}
This loop precalculates a lookup table for all the possible mean values that can occur, to avoid a costly division in the inner loop. On some systems, doing the division directly might actually be faster nowadays, but when I wrote it the lookup was the faster way.
for (i = -radius; i <= radius; i++){
    int ind = yi + Math.min(wm, Math.max(i, 0));
    p = pix[ind];
    rsum += (p & 0xff0000) >> 16;
    gsum += (p & 0x00ff00) >> 8;
    bsum += p & 0x0000ff;
}
The reason this algorithm is fast is that it uses a sliding window and thus reduces the number of required pixel lookups. The window slides from the left edge to the right (and in the second pass from top to bottom), only adding one pixel on the right and removing one from the left as it goes. The code above initializes the window by prefilling it with the leftmost edge pixel, repeated according to the kernel size.
if (y == 0){
    vmin[x] = Math.min(x + radius + 1, wm);
    vmax[x] = Math.max(x - radius, 0);
}
p1 = pix[yw + vmin[x]];
p2 = pix[yw + vmax[x]];
This is the part that adds a new pixel but at the same time handles the border conditions (when the window tries to read or remove pixels outside the bitmap).
rsum+=((p1 & 0xff0000)-(p2 & 0xff0000))>>16;
gsum+=((p1 & 0x00ff00)-(p2 & 0x00ff00))>>8;
bsum+= (p1 & 0x0000ff)-(p2 & 0x0000ff);
rsum, gsum and bsum are the accumulated sums of the pixels inside the sliding window. What you see is the new pixel on the right side being added to the sum and the leftmost pixel in the window being removed from it.
This box blur algorithm is outlined in this paper from 2001.
What it's basically doing is blurring the image twice; first in the horizontal direction, and then in the vertical direction. The end result is the same as if you had calculated the convolution of the image with a square box 2r+1 pixels across (i.e., from x-r to x+r, and from y-r to y+r at each point).
At each step, the blurred pixel value is simply the average of all the pixels in this range. This can be calculated quickly by keeping a running total at each point. When you move the range right (or down) one pixel, you subtract the pixel leaving at the left (top) end and add the pixel entering at the right (bottom) end. You still have to divide these running totals by 2r+1, but this can be sped up by precomputing n/(2r+1) for every possible total n (0 ≤ n < 256·(2r+1)) and storing the results in dv[].
The short summation loops at the start of each scan are just there to calculate the initial values of the running total.
And with a bit of juggling with max() and min() to avoid accessing out-of-range pixels, that's about all there is to it.
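To make the sliding-window idea concrete, here is an illustrative one-dimensional version (my own sketch, not the original code): one row, one channel, clamped edges, and the same division-by-lookup trick as dv[].

```java
public class BoxBlur1D {
    // Blur a single row of 8-bit values with a box of width 2*radius+1,
    // clamping reads at the edges.
    static int[] blurRow(int[] src, int radius) {
        int n = src.length, div = 2 * radius + 1;
        // Precompute sum/div for every possible window sum (the dv[] trick).
        int[] dv = new int[256 * div];
        for (int i = 0; i < dv.length; i++) dv[i] = i / div;
        int[] out = new int[n];
        int sum = 0;
        // Initialize the window centered at index 0, clamping out-of-range reads.
        for (int i = -radius; i <= radius; i++)
            sum += src[Math.min(n - 1, Math.max(i, 0))];
        for (int x = 0; x < n; x++) {
            out[x] = dv[sum]; // lookup replaces a division by 2*radius+1
            // Slide right: add the incoming pixel, drop the outgoing one.
            sum += src[Math.min(x + radius + 1, n - 1)] - src[Math.max(x - radius, 0)];
        }
        return out;
    }
}
```

Each output pixel costs only one addition and one subtraction, regardless of the radius; the 2D blur applies this pass horizontally and then vertically.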
Hints when using CompoundBlur
You will notice from the gradient tables that the blur builds from the outside inward, so it blurs the edges first and then the center. To blur from the center towards the edges instead, take all the values in the mul_table and subtract 255 from them. This inverts the bitmap: you can imagine the brightness of a pixel in your gradient map as the blur radius used there, a white pixel meaning a big blur and a black pixel a small blur.
Method for Quick Inverting:
Using Sublime Text and Microsoft Excel you can easily invert the values...
Sublime Text:
Get all the values into columns with the commas lined up vertically, then by clicking and dragging with the mouse wheel you can select downward and hit Enter to put each number on its own line. Now click and drag with the mouse wheel again and insert a "- 255" after every value and a "=" before every value (also click and drag to select all the commas and delete them). Now select all lines and copy.
Final format for Excel should be: = (original mul_table value) - 255 ... i.e. = 512 - 255
Excel: After copying the formatted values in Sublime, paste them into the top-left cell in Excel, and Excel will evaluate "=512-255" for you and instantly create the new inverted values. Copy all the cells, paste them back into your js file, and reinsert the commas.
Your CompoundBlur will now blur from the center towards the edges.
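If hand-editing in Excel feels tedious, the same inversion can be done in a loop. A sketch (it assumes the mul_table values are available as an int array):

```java
public class InvertTable {
    // Subtract 255 from every entry, as described above, to flip the
    // gradient so the blur builds from the center outward.
    static int[] invert(int[] mulTable) {
        int[] out = new int[mulTable.length];
        for (int i = 0; i < mulTable.length; i++) {
            out[i] = mulTable[i] - 255;
        }
        return out;
    }
}
```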
I am making a game that has campfire objects. What I want to do is to brighten all pixels in a circle around each campfire. However, looping through every pixel and changing those within the radius is not all that efficient and makes the game run at ~7 fps. Ideas on how to either make this process efficient or simulate light differently?
I haven't written the code for the fires but this is the basic loop to check each pixel/change its brightness based on a number:
public static BufferedImage updateLightLevels(BufferedImage img, float light)
{
    BufferedImage brightnessBuffer = new BufferedImage(img.getWidth(), img.getHeight(), BufferedImage.TYPE_4BYTE_ABGR);
    brightnessBuffer.getGraphics().drawImage(img, 0, 0, null);
    for (int i = 0; i < brightnessBuffer.getWidth(); i++)
    {
        for (int a = 0; a < brightnessBuffer.getHeight(); a++)
        {
            //get the color at the pixel
            int rgb = brightnessBuffer.getRGB(i, a);
            //check to see if it is transparent
            int alpha = (rgb >> 24) & 0x000000FF;
            if (alpha != 0)
            {
                //make a new color
                Color rgbColor = new Color(rgb);
                //turn it into an HSB color
                float[] hsbCol = Color.RGBtoHSB(rgbColor.getRed(), rgbColor.getGreen(), rgbColor.getBlue(), null);
                //lower it by the given amount;
                //if the pixel is already darker, push it all the way to black
                if (hsbCol[2] <= light)
                    hsbCol[2] -= (hsbCol[2]) - .01f;
                else
                    hsbCol[2] -= light;
                //turn the HSB color back into an RGB color
                int rgbNew = Color.HSBtoRGB(hsbCol[0], hsbCol[1], hsbCol[2]);
                //set the pixel to the new color
                brightnessBuffer.setRGB(i, a, rgbNew);
            }
        }
    }
    return brightnessBuffer;
}
I apologize if my code is not clean, I'm self taught.
I can give you lots of approaches.
You're currently rendering on the CPU, and you're checking every single pixel. That's hardcore brute force, and brute force isn't what the CPU is best at. It works, but as you've seen, the performance is abysmal.
I'd point you in two directions that would massively improve your performance:
Method 1 - Culling. Does every single pixel really need to have its lighting calculated? If you could instead calculate a general "ambient light", then you could paint most of the pixels in that ambient light, and only calculate the proper lighting for pixels closest to lights; each light throws a "spot" effect which fades into the ambient. That way you're only ever performing checks on a few of the pixels of the screen at a time (the circle area around each light). The code you posted looks like it just paints every pixel; I'm not seeing where the "circle" dropoff is even applied.
Edit:
Instead, sweep through the lights, and just loop through local offsets of the light position.
for (Light l : Lights){
    for (int x = l.getX() - LIGHT_DISTANCE; x < l.getX() + LIGHT_DISTANCE; x++){
        for (int y = l.getY() - LIGHT_DISTANCE; y < l.getY() + LIGHT_DISTANCE; y++){
            //calculate light
            int rgb = brightnessBuffer.getRGB(x, y);
            //do stuff
        }
    }
}
You may want to add a check with that method so overlapping lights don't cause a bunch of rechecks, unless you DO want that behavior (ideally those pixels would be twice as bright)
Method 2 - Offload the calculation to the GPU. There's a reason we have graphics cards: they're specifically built to number-crunch exactly the situations where you really need brute force. If you can offload this process to the GPU as a shader, it'll run lickety-split, even if you run it on every pixel several times over. This will require you to learn a graphics API, however; if you're working in Java, LibGDX makes it very painless to render using the GPU and pass a couple of shaders to it.
I am uncertain about the way in which you are going about calculating light values, but I do know that using the BufferedImage.getRGB() and BufferedImage.setRGB() methods is very slow.
I would suggest accessing the pixels of the BufferedImage directly from an array (much faster IMO)
To do this:
BufferedImage lightImage = new BufferedImage(width,height,BufferedImage.TYPE_INT_ARGB);
Raster r = lightImage.getRaster();
int[] lightPixels = ((DataBufferInt)r.getDataBuffer()).getData();
Now, changing any pixel in this array will show on your image. Note that the values used in this array are color values in the format of whatever type you created your image with.
In this case it is TYPE_INT_ARGB, meaning you have to include the alpha value in the number when setting the color (AARRGGBB, with alpha in the top byte).
Since this array is a 1D array, it is more difficult to access pixels using x and y co-ordinates. The following method is an implementation of accessing pixels from the lightPixels array more easily.
public void setLight(int x, int y, int[] array, int width, int value){
    array[width * y + x] = value;
}
*note: width is the width of your level, or the width of the 2D array your level might exist as, if it was a 2D array.
You can also get pixels from the lightPixels array with a similar method, just excluding the value and returning the array[width*y+x].
It is up to you how you use the setLight() and getLight() methods but in the cases that I have encountered, using this method is much faster than using getRGB and setRGB.
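As a sketch, the matching getter described above could look like this (static forms of both methods, so the index arithmetic is visible side by side):

```java
public class PixelAccess {
    // Flat row-major layout: pixel (x, y) lives at index width*y + x.
    static void setLight(int x, int y, int[] array, int width, int value) {
        array[width * y + x] = value;
    }

    // Same index arithmetic, returning the stored ARGB value.
    static int getLight(int x, int y, int[] array, int width) {
        return array[width * y + x];
    }
}
```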
Hope this helps
Can anyone suggest a method to get the actual pixel value of each pixel of a grayscale image?
I used the code below to get the pixels, but it just gives me the red value of each pixel. I don't know whether a black-and-white or grayscale image contains RGB pixels or not, so please tell me whether the method I used is right or wrong. Also, when I plotted a histogram for my image it was exactly the opposite of the ImageJ histogram for the same image, so I guess my way of grabbing pixels from a grayscale image is wrong somewhere.
If it is wrong, what is the right way to get the pixels?
Here is my code:
PlanarImage image = JAI.create("fileload", "C:\\16bit images\\alpXray.tiff");
BufferedImage bi = image.getAsBufferedImage();
int[] bins = new int[256];
int b = 0;
for (int x = 0; x < bi.getWidth(); x++) {
    for (int y = 0; y < bi.getHeight(); y++) {
        int p = bi.getRaster().getSample(x, y, b); // b = 0 for red, 1 for green, 2 for blue
        p = p / 256;
        bins[p]++;
    }
}
Every channel of an RGB image is itself grayscale; in fact, if you visualize a single channel (GIMP, Photoshop) you see only shades of gray.
Usually, to convert an RGB image to gray, you take the three channel values and compute their arithmetic mean,
or you simply take a single channel.
Tell me if I misunderstood the question.
EDIT:
OK. If you have a 3-channel grayscale image, you probably have three channels with the same value in each, so simply take the value of a single channel for each pixel.
If you have only one channel (of 16 bits), take that pixel value directly.
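A minimal sketch of the arithmetic-mean conversion described above (packed-int unpacking is the standard ARGB layout; the choice of integer division is my assumption):

```java
public class RgbToGray {
    // Arithmetic mean of the three channels of a packed 0xRRGGBB value.
    static int grayFromRgb(int rgb) {
        int r = (rgb >> 16) & 0xff;
        int g = (rgb >> 8) & 0xff;
        int b = rgb & 0xff;
        return (r + g + b) / 3;
    }
}
```

For a pixel where all three channels are already equal (a 3-channel grayscale image), the mean just returns that channel's value.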
I am trying to access the pixels of an image using the getRGB() method. The image I use for this purpose is an 8-bit image, i.e. each pixel is represented by 8 bits, so the possible values are 0-255.
The image I used was an 8-bit PNG, hence the type TYPE_BYTE_INDEXED:
if (type == BufferedImage.TYPE_BYTE_INDEXED) {
    System.out.println("type.byte.indexed");
    System.out.print(h + " " + w);
    sourceImage.getRGB(0, 0, w, h, rgbs, 0, w); //rgbs is an integer array
    for (i = 0; i < 10; i++) {
        System.out.print(" " + rgbs[i]);
    }
    System.out.println("rgbs len: " + rgbs.length);
}
The output of the for loop is something like:
-12048344 -12174804 -12048344 -12174804 -12174804 .......
I obtain the r, g, b components from it:
Color c=new Color(rgbs[i]);
r=c.getRed();
g=c.getGreen();
b=c.getBlue();
Now how do I combine these values again so that I can use the setRGB method? For a 24-bit image we can use
int rgb = 65536 * pixel[i] + 256 * pixel[i + 1] + pixel[i + 2];
The documentation clearly states that the returned values are in ARGB-form:
Returns an array of integer pixels in the default RGB color model (TYPE_INT_ARGB) and default sRGB color space
You can access the underlying buffer (that contains indexed pixels) with
byte[] data=((DataBufferByte)bufferedImage.getRaster().getDataBuffer()).getData(0);
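To recombine the components into the packed form that setRGB expects, bit shifts are the usual idiom (a sketch; equivalent to the multiply-by-65536/256 version, but with the alpha byte included):

```java
public class PackArgb {
    // Pack alpha, red, green and blue (each 0..255) into the single
    // 0xAARRGGBB int that getRGB/setRGB work with.
    static int pack(int a, int r, int g, int b) {
        return (a << 24) | (r << 16) | (g << 8) | b;
    }
}
```

Passing the result to new Color(argb, true) or setRGB round-trips the same component values.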
I have a question about my image-processing homework in Java. My question:
How do I get the gray level value of each pixel of an RGB image in Java?
I only know a little about how to get the RGB value of each pixel, using image.getRGB(x, y) to return the RGB value. I have no idea how to get the gray level value of each pixel of the image.
Thanks in advance.
First you'll need to extract the red, green and blue values from each pixel that you get from image.getRGB(x, y). See this answer about that. Then read about converting color to grayscale.
I agree with the previous answer. Create the BufferedImage like BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_BYTE_GRAY). From what I understand, a Raster is for reading pixel data and a WritableRaster is for writing or updating pixel data. I always use a WritableRaster (although this may not be the best way to do it), because you can both read pixel data and set pixel values. You can get the raster with WritableRaster raster = image.getRaster(); you can then get the value of a pixel using raster.getSample(x, y, 0). The 0 in the arguments is the band you want, which for grayscale images should be 0.
You could also set up a BufferedImage of type TYPE_BYTE_GRAY, and draw the picture into it, then get the raster data and find the values.
Complementing Daniel's answer:
WritableRaster wr = myBufferedImage.getRaster();
for (int i = 0; i < myBufferedImage.getWidth(); i++){
    for (int j = 0; j < myBufferedImage.getHeight(); j++){
        int grayLevelPixel = wr.getSample(i, j, 0);
        wr.setSample(i, j, 0, grayLevelPixel); // Setting the same gray level changes nothing; just shows how.
    }
}