I want the intensity of every point of a picture; which function should I call? Let's say my picture is 125 x 125 pixels and I want the intensity of every coordinate from (0,0) to (124,124). Is there a function to which I give a coordinate and it returns the intensity, like this:
function(0,123) --> intensity?
In ImageJ macro language:
result = getPixel(0,123);
print(result);
From other scripting languages, you can use the ImageJ Java API, e.g. the ImageProcessor#getPixel(int x, int y) method (ImageJ1), or the net.imglib2.Positionable#setPosition(int[] position) and net.imglib2.Sampler#get() methods (ImageJ2).
For example, in Python:
(using ImageJ1 structures)
from ij import IJ
imp = IJ.getImage()
result = imp.getProcessor().getPixel(0, 123)
print result
(using ImageJ2 structures and the #@ parameter annotation)
#@ Dataset img
ra = img.randomAccess()
ra.setPosition([0, 123])
result = ra.get()
print result
ImageJ uses the abstract class ImageProcessor, which is "roughly" an extension of the BufferedImage found in java.awt.
You can use one of the methods provided by ImageProcessor:
public abstract int getPixel(int x, int y);
public abstract int get(int x, int y);
public abstract int get(int index);
But it gets a little bit messy when you want to access pixels encoded on multiple channels (color images).
Here is a simple way to access pixels using the raster:
ImageProcessor myimage = ...;
BufferedImage image = myimage.getBufferedImage();
int intensity = image.getRaster().getSample(0, 0, 0); // getSample(x, y, band): get the value
image.getRaster().setSample(0, 0, 0, newValue);       // set a new value
That's the simplest way, but not the fastest. The fastest is direct access to the values, which are stored in the DataBuffer, but then you have to deal with the type.
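For completeness, here is a minimal sketch of that direct approach, assuming the image is 8-bit grayscale (TYPE_BYTE_GRAY) and therefore backed by a DataBufferByte; other image types are backed by DataBufferInt, DataBufferUShort, etc., so the cast must match the actual type:

import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;

// Grab the backing byte array once, then index it directly.
BufferedImage image = myimage.getBufferedImage();
byte[] data = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();

int x = 0, y = 123;
int intensity = data[y * image.getWidth() + x] & 0xFF; // mask: byte is signed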
In Java, I have a function like this:
public void setPixel(int x, int y, boolean on);
It sets a virtual black and white pixel, given whether it is on or not.
How can I call this function so the resulting display will be four times larger?
I tried this:
int x = 3;
int y = 3;
setPixel(x, y, true);
setPixel(x+1, y+1, true);
setPixel(x+2, y+2, true);
setPixel(x+3, y+3, true);
But naturally, it overlapped when I tried to draw something. How should I call the method?
While I'm tagging this Java, the concept could apply to any language.
Answering on these assumptions: setPixel sets a single pixel to white or black (if on is true, to black, else to white), and you want to use this function to take a B&W image and make it four times larger. The code you provided is wrong: it just draws a diagonal instead of a 4x4 block. Is this correct? If so:
One way to draw a four-times-larger image would be to have a getPixel(x, y) that tells you whether the pixel at (x, y) is on in the original image, and then to paint somewhere else in 4x4 blocks. Whenever you move by one pixel in the X or Y direction while reading the original image, you move by 4 in your scaled image. So perhaps what you intended was something like this:
void setBlock(int x, int y, boolean on, int scale) {
    for (int i = 0; i < scale; i++) {
        for (int j = 0; j < scale; j++) {
            setPixel(scale * x + i, scale * y + j, on);
        }
    }
}
And then iterate over your original image's coordinates doing something like this?
setBlock(x, y, getPixel(x, y), 4);
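For completeness, the outer iteration could look like this; srcWidth and srcHeight are hypothetical names for the original image's dimensions:

// Scale the whole original image by a factor of 4.
for (int x = 0; x < srcWidth; x++) {
    for (int y = 0; y < srcHeight; y++) {
        setBlock(x, y, getPixel(x, y), 4);
    }
}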
I want to save not only Red, Green, Blue and Alpha in my Image.
Every pixel also needs Z-depth information, i.e. how far it was from the camera.
Furthermore, I need to display the Image in a JFrame, so I can't use my custom Image class, but instead I need a BufferedImage or a subclass of it.
The Z-Depth shouldn't be visible in the JFrame. I just want to store it.
I've read much about the BufferedImage class.
I assume that I will have to extend classes like SampleModel or ColorModel, but I can't figure out how that should be done.
A nice solution would be to instantiate a new BufferedImage but with a custom pixel class that also stores depth, without actually extending BufferedImage.
But any solution and any idea will be appreciated!
Does anybody know which classes I have to extend in order to save more information in every pixel?
Well, I still don't understand why putting extra data into the BufferedImage would be more "flexible and extensible" in this case, so I'd probably just go with something like this myself:
public class DeepImage {
private final BufferedImage image;
private final float[] zIndex; // Must be image.width * image.height long
// TODO: Constructors and accessors as needed...
}
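If it helps, the TODO could be filled in along these lines inside DeepImage (just a sketch; the row-major indexing mirrors the comment on zIndex):

public DeepImage(BufferedImage image) {
    this.image = image;
    this.zIndex = new float[image.getWidth() * image.getHeight()];
}

public float getZ(int x, int y) {
    return zIndex[y * image.getWidth() + x];
}

public void setZ(int x, int y, float z) {
    zIndex[y * image.getWidth() + x] = z;
}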
But of course, you could put the Z-depth into a BufferedImage, if you really like to:
public class DeepBufferedImage extends BufferedImage {
private final float[] zIndex;
public DeepBufferedImage(final BufferedImage image, final float[] zIndex) {
super(image.getColorModel(), image.getRaster(), image.getColorModel().isAlphaPremultiplied(), null);
if (zIndex.length != image.getWidth() * image.getHeight()) {
throw new IllegalArgumentException("bad zIndex length");
}
this.zIndex = zIndex; // Obviously not safe, but we'll go for performance here
}
public float getZ(int x, int y) { ... }
public void setZ(int x, int y, float z) { ... }
public float[] getZ(int x, int y, int w, int h) { ... }
public void setZ(int x, int y, int w, int h, float[] z) { ... }
}
The above class would work just like a normal BufferedImage for all cases, except it also happens to have a Z-index for each pixel.
Alternatively, you could make the Z-index part of the DataBuffer/SampleModel/Raster, but I think that would also require a custom ColorModel (or ColorSpace) and quite a huge effort. These classes don't normally work well with "mixed" data types. Of course, you could pack all your samples into one long per pixel to avoid mixing data types:
long rgbaz = (long) getRGB(x, y) << 32 | (Float.floatToIntBits(getZ(x, y)) & 0xFFFFFFFFL);
But, this would obviously kill performance.
So, in short, I just don't see the benefit of doing that. Especially as you don't want to visualize anything but the RGBA values, and there is also no file format I know of that would support such a pixel layout*.
All of that said, there might still be a reason for you to implement this, I just don't see the need for it, given the requirements mentioned so far. Could of course be that I am missing something important. :-)
*) The TIFF format would support storing it, if you were using float samples for all 5 (R,G,B,A,Z) channels, or the packed long (64 bits) samples. However, I think it would be a lot simpler (and be a lot more compatible), to just store a normal 32 bit (8,8,8,8) RGBA image, and then a separate 1 channel float image in a multipage TIFF.
I believe you are looking for ImageComponent3D, which allows you to define a 3D array of pixels.
ImageComponent3D(int format, int width, int height, int depth)
Constructs a 3D image component object using the specified format, width, height and depth.
ImageComponent3D Documentation
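For illustration, usage could look roughly like this, assuming the Java 3D library (javax.media.j3d) is on the classpath; the dimensions are made up:

import javax.media.j3d.ImageComponent;
import javax.media.j3d.ImageComponent3D;
import java.awt.image.BufferedImage;

// A 3D image component holding 16 slices of 125 x 125 RGBA data.
ImageComponent3D volume =
        new ImageComponent3D(ImageComponent.FORMAT_RGBA, 125, 125, 16);

// Each slice can then be filled from a BufferedImage:
BufferedImage slice = new BufferedImage(125, 125, BufferedImage.TYPE_INT_ARGB);
volume.set(0, slice); // set(int index, BufferedImage image)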
I need to extract a pixel region described by (2n+1) x (2m+1) centred on leftimage (xl,yl). n and m are user input parameters and xl and yl are already defined. Thus far I have this code:
for (int xl = n; xl < picOneGreyScale.getWidth() - n; xl++) {
    for (int yl = m; yl < picOneGreyScale.getHeight() - m; yl++) {
        // extract (2n+1) x (2m+1) pixel region centred on leftimage (xl, yl)
        for (int nArea = xl - n; nArea < xl + n + 1; nArea++) {
            for (int mArea = yl - m; mArea < yl + m + 1; mArea++) {
                // *code here*
            }
        }
    }
}
I'm uncertain as to how to continue. I have defined a BufferedImage called leftRegion:
BufferedImage leftRegion = new BufferedImage((2*n+1),(2*m+1),BufferedImage.TYPE_BYTE_GRAY);
which I intend to use to "extract" my pixel region into. My thoughts thus far: where it says *code here*, extract the pixel at the current location (using getRGB?), then nest another for loop to place that pixel at the correct (x, y) coordinates in leftRegion. I'm not sure how to do this, however, or whether I'm overthinking it. Alternatively, it may be possible to use getRGB with extended arguments:
getRGB(int startX, int startY, int w, int h, int[] rgbArray, int offset, int scansize)
instead of the two inner for loops, but again I'm not so hot on how to implement this. Finally, there is a BufferedImage method called copyData that looks like it might be relevant, but I'm not sure how to use it. What's the best way to implement this? Many thanks as always.
Additional Information:
Okay, so I'm trying to use the getSubimage method of the BufferedImage class:
leftRegion = picOneGreyScale.getSubimage(xl, yl, (2*n+1), (2*m+1));
only I'm getting the error "(y + height) is outside of Raster". How does getSubimage work exactly? Will the image be centred around (xl, yl), with the width and height extending equally on either side, or does it work differently? Am I even on the right path?
I figured it out. I simply replaced the two inner for loops with this:
leftRegion = picOneGreyScale.getSubimage(xl - n, yl - m, (2 * n + 1), (2 * m + 1));
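For anyone hitting the same error: getSubimage(x, y, w, h) anchors (x, y) at the top-left corner of the region, not the centre, and throws when the rectangle extends past the raster, hence the "(y + height)" message. Here is a sketch that clamps the centred window near the borders (the clamping is an addition, not part of the code above):

// (x0, y0) is the top-left of the centred (2n+1) x (2m+1) window,
// clamped so the rectangle stays inside the image near the borders.
int x0 = Math.max(0, Math.min(xl - n, picOneGreyScale.getWidth() - (2 * n + 1)));
int y0 = Math.max(0, Math.min(yl - m, picOneGreyScale.getHeight() - (2 * m + 1)));
BufferedImage leftRegion = picOneGreyScale.getSubimage(x0, y0, 2 * n + 1, 2 * m + 1);

Also note that getSubimage returns a view sharing pixel data with the original, so copy it (e.g. by drawing it into a fresh BufferedImage) if the region must be independent.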
I have the following problem: I have a charting program whose design is black, but the charts (which I get from the server as images) are light; they actually use only 5 colors: red, green, white, black and gray.
To fit the design, inversion does a good job; the only problem is that red and green are inverted as well (green -> pink, red -> green).
Is there a way to invert everything except those 2 colors, or a way to repaint those colors after inversion?
And how costly are those operations (since I get the chart updates pretty often)?
Thanks in advance :)
UPDATE
I tried replacing colors with the setPixel method in a loop:
for (int x = 0; x < chart.getWidth(); x++) {
    for (int y = 0; y < chart.getHeight(); y++) {
        final int replacement = getColorReplacement(chart.getPixel(x, y));
        if (replacement != 0) {
            chart.setPixel(x, y, replacement);
        }
    }
}
Unfortunately, the method takes too long (~650 ms). Is there a faster way to do it, and will the setPixels() method work faster?
Manipulating a bitmap is much faster if you copy the image data into an int array by calling getPixels only once, and don't call any function inside the loop. Just manipulate the array, then call setPixels at the end.
Something like this:
int length = bitmap.getWidth() * bitmap.getHeight();
int[] array = new int[length];
bitmap.getPixels(array, 0, bitmap.getWidth(), 0, 0, bitmap.getWidth(), bitmap.getHeight());
for (int i = 0; i < length; i++) {
    // If the bitmap is in ARGB_8888 format
    if (array[i] == 0xff000000) {      // opaque black -> white
        array[i] = 0xffffffff;
    } // else if ... (further replacements)
}
bitmap.setPixels(array, 0, bitmap.getWidth(), 0, 0, bitmap.getWidth(), bitmap.getHeight());
If you have it available as a BufferedImage, you can access its raster and edit it as you please.
WritableRaster raster = my_image.getRaster();
// Edit all the pixels you wanna change in the raster (green -> red, pink -> green)
// for (x,y) in ...
// raster.setPixel(x, y, ...)
my_image.setData(raster);
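Filled in, the commented part could look like this for a 3-band RGB raster; the exact sample values for "green" and "pink" are assumptions, so adjust them to the chart's actual palette:

// Reused per-pixel sample buffer: {R, G, B}.
int[] px = new int[3];
for (int y = 0; y < raster.getHeight(); y++) {
    for (int x = 0; x < raster.getWidth(); x++) {
        raster.getPixel(x, y, px);
        if (px[0] == 0 && px[1] == 255 && px[2] == 0) {          // green -> red
            px[0] = 255; px[1] = 0;
            raster.setPixel(x, y, px);
        } else if (px[0] == 255 && px[1] == 0 && px[2] == 255) { // pink -> green
            px[0] = 0; px[1] = 255; px[2] = 0;
            raster.setPixel(x, y, px);
        }
    }
}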
OK, seeing that you're really only using 5 colors, it's quite easy.
Regarding performance, I don't know about Android, but I can tell you that in Java, using setRGB is amazingly slower than getting the data buffer and writing directly into the int[].
When I say "amazingly slower", to give you an idea, on OS X 10.4 the following code:
for ( int x = 0; x < width; x++ ) {
for ( int y = 0; y < height; y++ ) {
img.setRGB(x,y,0xFFFFFFFF);
}
}
can be 100 times (!) slower than:
for ( int x = 0; x < width; x++ ) {
for ( int y = 0; y < height; y++ ) {
array[y*width+x] = 0xFFFFFFFF;
}
}
You read correctly: one hundred times (!), measured on a Core 2 Duo Mac Mini running OS X 10.4.
(Of course you first need to get access to the underlying int[] array, but hopefully that shouldn't be difficult.)
I cannot stress enough that the problem isn't the two for loops: in both cases they are the same unoptimized loops. So it's really setRGB that is the issue here.
I don't know how this works on Android, but you should probably get rid of setRGB if you want something that performs well.
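In plain Java, getting at that int[] can look like this, assuming the image is one of the int-packed types (TYPE_INT_RGB or TYPE_INT_ARGB); other types are backed by different DataBuffer subclasses:

import java.awt.image.BufferedImage;
import java.awt.image.DataBufferInt;

// The cast must match the image type; int-packed images
// are backed by a DataBufferInt.
int[] array = ((DataBufferInt) img.getRaster().getDataBuffer()).getData();

// Writes to 'array' now go straight into the image:
int width = img.getWidth(), height = img.getHeight();
for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
        array[y * width + x] = 0xFFFFFFFF;
    }
}

One caveat: touching the DataBuffer directly can prevent Java2D from caching the image for hardware acceleration, so this is a win for CPU pixel work, not necessarily for repeated drawing.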
A quick way would be to use AvoidXfermode to repaint just those colors you want changed; you could then switch between any colors you want. You just need to do something like this:
// will change red to green
Paint change1 = new Paint();
change1.setColor(Color.GREEN);
change1.setXfermode(new AvoidXfermode(Color.RED, 245, AvoidXfermode.Mode.TARGET));
Canvas c = new Canvas();
c.setBitmap(chart);
c.drawRect(0, 0, width, height, change1);
// rinse, repeat for other colors
You may need to play with the tolerance of the AvoidXfermode, but that should do what you want a lot faster than a per-pixel calculation. Also, make sure your chart image is in ARGB_8888 mode. By default, Android tends to work with images in RGB565 mode, which tends to mess up color calculations like the ones you want to use. To be safe, make the image both ARGB_8888 and mutable by calling Bitmap chart = chartFromServer.copy(Config.ARGB_8888, true); before you set up the Xfermode.
Clarification: to change other colors, you wouldn't have to re-load the image all over again; you would just have to create other Paints with the appropriate colors you want changed, like so:
// changes red to green
Paint change1 = new Paint();
change1.setColor(Color.GREEN);
change1.setXfermode(new AvoidXfermode(Color.RED, 245, AvoidXfermode.Mode.TARGET));
// changes white to blue
Paint change2 = new Paint();
change2.setColor(Color.BLUE);
change2.setXfermode(new AvoidXfermode(Color.WHITE, 245, AvoidXfermode.Mode.TARGET));
// ... other Paints with other changes you want to apply to this image
Canvas c = new Canvas();
c.setBitmap(chart);
c.drawRect(0, 0, width, height, change1);
c.drawRect(0, 0, width, height, change2);
//...
c.drawRect(0, 0, width, height, changeN);
I want to do a simple color to grayscale conversion using java.awt.image.BufferedImage. I'm a beginner in the field of image processing, so please forgive if I confused something.
My input image is an RGB 24-bit image (no alpha). I'd like to obtain an 8-bit grayscale BufferedImage on the output, which means I have a class like this (details omitted for clarity):
public class GrayscaleFilter {
private BufferedImage colorFrame;
private BufferedImage grayFrame =
new BufferedImage(width, height, BufferedImage.TYPE_BYTE_GRAY);
I've successfully tried out two conversion methods so far, the first being:
private BufferedImageOp grayscaleConv =
new ColorConvertOp(ColorSpace.getInstance(ColorSpace.CS_GRAY), null);
protected void filter() {
grayscaleConv.filter(colorFrame, grayFrame);
}
And the second being:
protected void filter() {
WritableRaster raster = grayFrame.getRaster();
for(int x = 0; x < raster.getWidth(); x++) {
for(int y = 0; y < raster.getHeight(); y++){
int argb = colorFrame.getRGB(x,y);
int r = (argb >> 16) & 0xff;
int g = (argb >> 8) & 0xff;
int b = (argb ) & 0xff;
int l = (int) (.299 * r + .587 * g + .114 * b);
raster.setSample(x, y, 0, l);
}
}
}
The first method works much faster, but the image produced is very dark, which means I'm losing bandwidth, and that is unacceptable. (There is some color-conversion mapping used between the grayscale and sRGB ColorModels, called tosRGB8LUT, which doesn't work well for me as far as I can tell; I'm not sure, I just suppose those values are used.) The second method works slower, but the effect is very nice.
Is there a way of combining those two, e.g. using a custom indexed ColorSpace for ColorConvertOp? If yes, could you please give me an example?
Thanks in advance.
public BufferedImage getGrayScale(BufferedImage inputImage) {
    // Let Java2D do the conversion by drawing into a TYPE_BYTE_GRAY image
    BufferedImage img = new BufferedImage(inputImage.getWidth(), inputImage.getHeight(), BufferedImage.TYPE_BYTE_GRAY);
    Graphics g = img.getGraphics();
    g.drawImage(inputImage, 0, 0, null);
    g.dispose();
    return img;
}
There's an example here that differs from your first example in one small aspect: the parameters to ColorConvertOp. Try this:
protected void filter() {
BufferedImageOp grayscaleConv =
new ColorConvertOp(colorFrame.getColorModel().getColorSpace(),
grayFrame.getColorModel().getColorSpace(), null);
grayscaleConv.filter(colorFrame, grayFrame);
}
Try modifying your second approach: instead of working on a single pixel, retrieve an array of ARGB int values, convert them, and set them back.
The second method is based on the pixel's luminance, which is why it obtains more favorable visual results. It could be sped up a bit by replacing the expensive floating-point arithmetic used to calculate l with a lookup table.
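For example, the weights could be folded into three 256-entry integer tables computed once up front (a sketch; the 16-bit fixed-point scaling is one choice among several):

// Per-channel luminance contributions, scaled by 1 << 16 so the
// per-pixel work is integer-only.
int[] lumR = new int[256], lumG = new int[256], lumB = new int[256];
for (int i = 0; i < 256; i++) {
    lumR[i] = (int) (.299 * i * 65536);
    lumG[i] = (int) (.587 * i * 65536);
    lumB[i] = (int) (.114 * i * 65536);
}
// Inside the pixel loop the floating-point expression then becomes:
int l = (lumR[r] + lumG[g] + lumB[b]) >> 16;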
Here is a solution that has worked for me in some situations.
Take the image height y, the image width x, the image color depth m, and the integer bit size n. This only works if (2^n)/(x*y*2^m) >= 1, i.e. if an n-bit integer can hold the sum of x*y samples of m bits each without overflowing.
Keep an n-bit integer total for each color channel as you process the initial grayscale values. Divide each total by x*y to get the average value avr[channel] of each channel, then add (192 - avr[channel]) to each pixel in each channel.
Keep in mind that this approach probably won't have the same level of quality as standard luminance approaches, but if you're looking for a compromise between speed and quality and don't want to deal with expensive floating-point operations, it may work for you.
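A rough sketch of that averaging step for a single 8-bit channel (192 is the target value from the text; the clamping at the end is an added assumption):

// 'pixels' is assumed to hold grayscale values in 0..255.
long total = 0; // the n-bit accumulator from the description
for (int p : pixels) {
    total += p;
}
int avg = (int) (total / pixels.length);
int offset = 192 - avg;
for (int i = 0; i < pixels.length; i++) {
    pixels[i] = Math.max(0, Math.min(255, pixels[i] + offset)); // clamp to 0..255
}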