I'm trying to take a BufferedImage, apply a Fourier transform (using jtransforms), and write the data back to the BufferedImage. But I'm stuck creating a new Raster to set the results back. Am I missing something here?
BufferedImage bitmap = ImageIO.read(new File("filename"));
float[] bitfloat = null;
FloatDCT_2D dct = new FloatDCT_2D(bitmap.getWidth(), bitmap.getHeight());
bitfloat = bitmap.getData().getPixels(0, 0, bitmap.getWidth(), bitmap.getHeight(), bitfloat);
dct.forward(bitfloat, false);
But I'm stumped trying to finish off this line: what should I give the createRaster function? The javadocs for createRaster make little sense to me:
bitmap.setData(Raster.createRaster(`arg1`, `arg2`, `arg3`));
I'm starting to wonder if a float array is even necessary, but there aren't many examples of jtransforms out there.
Don't create a new Raster. Use WritableRaster.setPixels(int,int,int,int,float[]) to write the array back to the image.
final int w = bitmap.getWidth();
final int h = bitmap.getHeight();
// getRaster() gives you the image's own WritableRaster (getData() would only return a copy),
// so pixels written through it go straight back into the BufferedImage.
final WritableRaster wr = bitmap.getRaster();
bitfloat = wr.getPixels(0, 0, w, h, bitfloat);
// do processing here
wr.setPixels(0, 0, w, h, bitfloat);
Note also that if you're planning to display this image, you should really copy it to a screen-compatible type; ImageIO seldom returns those.
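A minimal sketch of such a copy (standard AWT calls; the toCompatibleImage name is just my own placeholder):

import java.awt.Graphics2D;
import java.awt.GraphicsConfiguration;
import java.awt.GraphicsEnvironment;
import java.awt.image.BufferedImage;

// Sketch: copy an arbitrary BufferedImage into one whose type matches the
// default screen configuration, so later drawing to the screen is fast.
public static BufferedImage toCompatibleImage(BufferedImage src) {
    GraphicsConfiguration gc = GraphicsEnvironment
            .getLocalGraphicsEnvironment()
            .getDefaultScreenDevice()
            .getDefaultConfiguration();
    BufferedImage dst = gc.createCompatibleImage(
            src.getWidth(), src.getHeight(), src.getTransparency());
    Graphics2D g = dst.createGraphics();
    g.drawImage(src, 0, 0, null); // drawImage converts the pixel format while copying
    g.dispose();
    return dst;
}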
I'm doing Google searches for FloatDCT_2D to see what package/library it's in, and it looks like there are several references to various sources, such as "edu.emory.mathcs.jtransforms.dct.FloatDCT_2D". Without knowing what custom library you're using, it's really hard to give you any advice on how to perform the transform.
My guess, in general, is that you should read the input data from the original raster, perform the transform on that data, then write the output to a new raster.
However, your last statement all on its own looks odd... Raster.createRaster() looks like you're calling a static method on a class you've never referenced in the code you've posted. How is that generating data for your bitmap? Even in pseudocode, you would need to take the results of your transform and build the resulting raster.
Related
I am working with OpenCV in java, but I don't understand part of a class that loads pictures in java:
public class ImageProcessor {

    public BufferedImage toBufferedImage(Mat matrix) {
        int type = BufferedImage.TYPE_BYTE_GRAY;
        if (matrix.channels() > 1) {
            type = BufferedImage.TYPE_3BYTE_BGR;
        }
        int bufferSize = matrix.channels() * matrix.cols() * matrix.rows();
        byte[] buffer = new byte[bufferSize];
        matrix.get(0, 0, buffer); // get all the pixels
        BufferedImage image = new BufferedImage(matrix.cols(), matrix.rows(), type);
        final byte[] targetPixels = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();
        System.arraycopy(buffer, 0, targetPixels, 0, buffer.length);
        return image;
    }
}
The main class sends a Mat object to this class, and the method returns a BufferedImage. But I don't understand targetPixels, because the class doesn't use it anywhere else. Yet whenever I comment out targetPixels and the System.arraycopy call, the result is a black picture.
I want to know what targetPixels is and what it does.
The point is less about that array than about the methods that get you there.
You start here: getRaster(). That is supposed to return a WritableRaster ... and so on.
That code then calls getDataBuffer() on the Raster; and in the Raster documentation we find:
A class representing a rectangular array of pixels. A Raster encapsulates a DataBuffer that stores the sample values and a SampleModel that describes how to locate a given sample value in a DataBuffer.
What happens in essence here: that image object, in the end, has an array of bytes that is supposed to contain certain information.
That assignment:
final byte[] targetPixels = ...
retrieves a reference to that internal data; and then System.arraycopy() is used to copy data into that array.
For the record: that doesn't look like a good approach, as it only works when this copy action really affects the internals of that image object. But what if that last getData() call returned a copy of the internal data?
In other words: this code tries to gain direct access to the internal data of some object, and then manipulates that internal data.
Even if that works today, it is not robust and might break easily in the future. The other thing: note that this code does an unconditional cast to (DataBufferByte). That code throws a RuntimeException if the buffer doesn't have exactly that type.
Probably that is "all fine", since it is all related to AWT classes which have been around for ages and will probably never change again. But as said, this code has various potential issues.
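If you want to avoid relying on that internal array, a rough sketch of a more defensive variant (my own code, not part of the original class; it assumes the same Mat layout as above) would be:

import java.awt.image.BufferedImage;
import org.opencv.core.Mat;

// Sketch: instead of casting the DataBuffer and writing into its backing array,
// hand the bytes to the WritableRaster and let it copy them in.
public static BufferedImage toBufferedImageSafe(Mat matrix) {
    int type = matrix.channels() > 1 ? BufferedImage.TYPE_3BYTE_BGR
                                     : BufferedImage.TYPE_BYTE_GRAY;
    byte[] buffer = new byte[matrix.channels() * matrix.cols() * matrix.rows()];
    matrix.get(0, 0, buffer); // copy all the pixels out of the Mat
    BufferedImage image = new BufferedImage(matrix.cols(), matrix.rows(), type);
    // setDataElements copies the byte[] into the raster for byte-based image types
    image.getRaster().setDataElements(0, 0, matrix.cols(), matrix.rows(), buffer);
    return image;
}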
targetPixels is the main image data (i.e. the pixels) of your new image. The actual image is created when the pixel data is copied from buffer into targetPixels.
targetPixels is the array of bytes backing your newly created BufferedImage; those bytes are initially empty, so you need to copy the bytes from buffer into it with System.arraycopy. :)
I have a program that requires going through BufferedImages pixel by pixel quite often. Normally efficiency wouldn't matter enough for me to care, but I really want every millisecond I can get.
As an example, right now, the fastest way I've found of isolating the red channel in an image looks like this:
int[] rgb = image.getRGB(0, 0, image.getWidth(), image.getHeight(), null, 0, image.getWidth());
for (int i = 0; i < rgb.length; i++)
    rgb[i] = rgb[i] & 0xFFFF0000;
image.setRGB(0, 0, image.getWidth(), image.getHeight(), rgb, 0, image.getWidth());
Which means it's going through the image once to populate the array, again to apply the filter, and finally a third time to update the pixels. Also, considering that any pixel with an alpha value of 0 gets completely zeroed out, it must be going through it at least one more time as well.
I've also tried using the individual pixel versions of getRGB() and setRGB() in a more traditional nested for loop, but that's even slower (though probably takes far less RAM since it doesn't have the int[]).
This is the kind of problem that Iterable types solve, but I can't find any way of applying that principle to images. For this project, I'm okay with total hacks that aren't "best practices" if they work.
Is there any way of iterating over the raw data of a buffered image?
The fastest way is to access the DataBuffer, because it gives you direct access to the pixel values, but you have to handle all the cases/types:
public void test(BufferedImage img)
{
    switch (img.getType())
    {
        case BufferedImage.TYPE_BYTE_GRAY:
        case BufferedImage.TYPE_3BYTE_BGR:
            byte[] bufferByte = ((DataBufferByte) img.getRaster().getDataBuffer()).getData();
            //...
            break;
        case BufferedImage.TYPE_USHORT_GRAY:
            short[] bufferShort = ((DataBufferUShort) img.getRaster().getDataBuffer()).getData();
            //...
            break;
        // Other cases
    }
}
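Applied to the red-channel filter from the question, the int-packed case would look roughly like this (a sketch, assuming the image type really is TYPE_INT_RGB or TYPE_INT_ARGB):

import java.awt.image.BufferedImage;
import java.awt.image.DataBufferInt;

// Sketch: mask every packed pixel in place, without the getRGB()/setRGB() copies.
public static void keepRedChannel(BufferedImage image) {
    int[] pixels = ((DataBufferInt) image.getRaster().getDataBuffer()).getData();
    for (int i = 0; i < pixels.length; i++) {
        pixels[i] &= 0xFFFF0000; // keep alpha and red, zero out green and blue
    }
}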
I'm currently following a series on Java game development from scratch. I understand most Java and OOP concepts but have very little experience with graphics and hardware acceleration.
The lines of code that I am questioning are:
private BufferedImage image = new BufferedImage(WIDTH, HEIGHT, BufferedImage.TYPE_INT_RGB);
private int[] pixels = ((DataBufferInt) image.getRaster().getDataBuffer()).getData();
The BufferedImage "image" variable is always the variable being drawn to the screen through a render method. Usually something like:
public void render() {
    BufferStrategy bs = this.getBufferStrategy();
    if (bs == null) {
        this.createBufferStrategy(3);
        return;
    }
    Graphics g = bs.getDrawGraphics();
    g.drawImage(image, 0, 0, WIDTH, HEIGHT, null);
    g.dispose();
    bs.show();
}
I understand that the array of pixels contains every pixel within the BufferedImage; however, it seems as though every time that array is filled with values it directly affects the contents of the "image" variable. There is never a method call that copies the pixels array values into the image.
Are these variables actually linked in such a way? Does the use of:
private int[] pixels = ((DataBufferInt) image.getRaster().getDataBuffer()).getData();
create an automatic link between the image and the array created in the code above? Maybe I am going crazy and just missed something, but I have reviewed the code several times and not once is the "image" variable manipulated after its initial creation (besides being rendered to the screen, of course). It's always the array "pixels" being filled with different values that causes the change in the rendered image.
Some insight on this would be wonderful. Thank you in advance!
Why don't you call
image.getData()
instead of
private int[] pixels = ((DataBufferInt) image.getRaster().getDataBuffer()).getData();
image.getData() returns a copy of the Raster. getRaster() returns a WritableRaster with the ability to modify pixels. I'm guessing, but getRaster() probably returns a child of the image's Raster and is therefore writable when you modify the array. Try image.getData() to see if it works. If not, post back here and I'll take a closer look.
I looked into this further. The source code that comes with the JDK shows that image.getRaster().getDataBuffer().getData() returns the source data array. image.getData() indeed returns a copy. If the image is modified, the data in getData() will not be modified.
You can call getPixels on the returned Raster:
public int[] getPixels(int x, int y, int w, int h, int[] iArray)

Returns an int array containing all samples for a rectangle of pixels, one sample per array element. An ArrayIndexOutOfBoundsException may be thrown if the coordinates are not in bounds. However, explicit bounds checking is not guaranteed.
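For example, a minimal usage sketch (assuming image is the BufferedImage from the question; the null argument just asks getPixels to allocate the array):

// Sketch: one int per sample, so a 3-band RGB image yields 3 ints per pixel
java.awt.image.Raster raster = image.getData();
int[] samples = raster.getPixels(0, 0, image.getWidth(), image.getHeight(), (int[]) null);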
Use int[] pixels = ((DataBufferInt) image.getRaster().getDataBuffer()).getData(); instead of image.getData(). image.getData() just returns a copy of the image data, whereas image.getRaster() gives you the original pixel array and thus allows you to actually write pixels to it.
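A quick way to convince yourself that the array is live rather than a copy (my own sketch, not from the tutorial code):

import java.awt.image.BufferedImage;
import java.awt.image.DataBufferInt;
import java.util.Arrays;

public class LivePixelsDemo {
    public static void main(String[] args) {
        BufferedImage image = new BufferedImage(100, 100, BufferedImage.TYPE_INT_RGB);
        int[] pixels = ((DataBufferInt) image.getRaster().getDataBuffer()).getData();
        // Write red into the backing array; no copy-back step is needed.
        Arrays.fill(pixels, 0x00FF0000);
        // The image sees the change immediately; this prints "ffff0000".
        System.out.println(Integer.toHexString(image.getRGB(0, 0)));
    }
}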
I am using JNA, and I am getting a byte array of raw data from my C++ method.
Now I am stuck on how to get a BufferedImage in Java from this raw data byte array.
I have tried a few things to read it as a TIFF image, but I didn't have any success.
These are the attempts I have made so far.
My byte array contains data for a 16-bit grayscale image. I get this data from an X-sensor device, and now I need to get an image from this byte array.
FIRST TRY
byte[] byteArray = myVar1.getByteArray(0, 3318000); // array of raw data
ImageInputStream stream1 = ImageIO.createImageInputStream(new ByteArrayInputStream(byteArray));
ByteArraySeekableStream stream = new ByteArraySeekableStream(byteArray, 0, 3318000);
BufferedImage bi = ImageIO.read(stream);
SECOND TRY
SeekableStream stream = new ByteArraySeekableStream(byteArray);
String[] names = ImageCodec.getDecoderNames(stream);
ImageDecoder dec = ImageCodec.createImageDecoder(names[0], stream, null);
//at this line get the error ArrayIndexOutOfBoundsException: 0
RenderedImage im = dec.decodeAsRenderedImage();
I think I am missing something here.
Since my array contains raw data, it does not contain the header for a TIFF image.
Am I right?
If yes, then how do I provide this header in the byte array, and eventually how do I get an image from it?
To test that I am getting the proper byte array from my native method, I stored it as a .raw file; after opening this raw file in the ImageJ software, it shows me the correct image, so my raw data is correct.
The only thing I need is how to convert my raw byte array into an image.
Here is what I am using to convert raw pixel data to a BufferedImage. My pixels are signed 16-bit:
public static BufferedImage short2Buffered(short[] pixels, int width, int height) throws IllegalArgumentException {
    BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_USHORT_GRAY);
    // TYPE_USHORT_GRAY images are backed by a DataBufferUShort
    short[] imgData = ((DataBufferUShort) image.getRaster().getDataBuffer()).getData();
    System.arraycopy(pixels, 0, imgData, 0, pixels.length);
    return image;
}
I'm then using JAI to encode the resulting image. Tell me if you need the code as well.
EDIT: I have greatly improved the speed thanks to @Brent Nash's answer on a similar question.
EDIT: For the sake of completeness, here is the code for unsigned 8 bits:
public static BufferedImage byte2Buffered(byte[] pixels, int width, int height) throws IllegalArgumentException {
BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_BYTE_GRAY);
byte[] imgData = ((DataBufferByte)image.getRaster().getDataBuffer()).getData();
System.arraycopy(pixels, 0, imgData, 0, pixels.length);
return image;
}
Whether the byte array contains literally just pixel data or a structured image file such as TIFF really depends on where you got it from; it's impossible to answer that from the information provided.
However, if it does contain a structured image file, then you can generally:
wrap a ByteArrayInputStream around it
pass that stream to ImageIO.read()
If you just have literally raw pixel data, then you have a couple of main options:
'manually' get that pixel data so that it is in an int array with one int per pixel in ARGB format (the ByteBuffer and IntBuffer classes can help you with twiddling about with bytes)
create a blank BufferedImage, then call its setRGB() method to set the actual pixel contents from your previously prepared int array
I think the above is easiest if you know what you're doing with the bits and bytes. However, in principle, you should be able to do the following:
find a suitable WritableRaster.create... method that will create a WritableRaster object wrapped around your data
pass that WritableRaster into the relevant BufferedImage constructor to create your image (a sketch of this follows below)
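For the 16-bit grayscale data in the question above, that raster route might look roughly like this. This is only a sketch: the big-endian byte order and the rawGray16ToImage name are my assumptions, so adjust them to your sensor's format.

import java.awt.Point;
import java.awt.Transparency;
import java.awt.color.ColorSpace;
import java.awt.image.BufferedImage;
import java.awt.image.ComponentColorModel;
import java.awt.image.DataBuffer;
import java.awt.image.DataBufferUShort;
import java.awt.image.Raster;
import java.awt.image.WritableRaster;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Sketch: wrap raw unsigned 16-bit grayscale samples in a WritableRaster and
// build a BufferedImage around it. Flip the byte order if the result is noise.
public static BufferedImage rawGray16ToImage(byte[] raw, int width, int height) {
    short[] samples = new short[width * height];
    ByteBuffer.wrap(raw).order(ByteOrder.BIG_ENDIAN).asShortBuffer().get(samples);

    DataBufferUShort db = new DataBufferUShort(samples, samples.length);
    WritableRaster raster = Raster.createInterleavedRaster(
            db, width, height, width, 1, new int[] {0}, new Point(0, 0));
    ComponentColorModel cm = new ComponentColorModel(
            ColorSpace.getInstance(ColorSpace.CS_GRAY),
            new int[] {16}, false, false,
            Transparency.OPAQUE, DataBuffer.TYPE_USHORT);
    return new BufferedImage(cm, raster, false, null);
}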
I have a method converting BufferedImages whose type is TYPE_CUSTOM to TYPE_INT_RGB. I am using the following code, but I would really like to find a faster way of doing this.
BufferedImage newImg = new BufferedImage(
src.getWidth(),
src.getHeight(),
BufferedImage.TYPE_INT_RGB);
ColorConvertOp op = new ColorConvertOp(null);
op.filter(src, newImg);
It works fine, however it's quite slow and I am wondering if there is a faster way to do this conversion.
ColorModel Before Conversion:
ColorModel: #pixelBits = 24 numComponents = 3 color space = java.awt.color.ICC_ColorSpace@1c92586f transparency = 1 has alpha = false isAlphaPre = false
ColorModel After Conversion:
DirectColorModel: rmask=ff0000 gmask=ff00 bmask=ff amask=0
Thanks!
Update:
Turns out working with the raw pixel data was the best way. Since the TYPE_CUSTOM image was actually RGB, converting it manually is simple and is about 95% faster than ColorConvertOp.
public static BufferedImage makeCompatible(BufferedImage img) throws IOException {
    // Allocate the new image
    BufferedImage dstImage = new BufferedImage(img.getWidth(), img.getHeight(), BufferedImage.TYPE_INT_RGB);

    // Check if the ColorSpace is RGB and the TransferType is BYTE.
    // Otherwise this fast method does not work as expected
    ColorModel cm = img.getColorModel();
    if (cm.getColorSpace().getType() == ColorSpace.TYPE_RGB
            && img.getRaster().getTransferType() == DataBuffer.TYPE_BYTE) {
        // Allocate arrays
        int len = img.getWidth() * img.getHeight();
        byte[] src = new byte[len * 3];
        int[] dst = new int[len];

        // Read the src image data into the array
        img.getRaster().getDataElements(0, 0, img.getWidth(), img.getHeight(), src);

        // Convert to INT_RGB
        int j = 0;
        for (int i = 0; i < len; i++) {
            dst[i] = (((int) src[j++] & 0xFF) << 16) |
                     (((int) src[j++] & 0xFF) << 8) |
                     (((int) src[j++] & 0xFF));
        }

        // Set the dst image data
        dstImage.getRaster().setDataElements(0, 0, img.getWidth(), img.getHeight(), dst);
        return dstImage;
    }

    ColorConvertOp op = new ColorConvertOp(null);
    op.filter(img, dstImage);
    return dstImage;
}
BufferedImages are painfully slow. I have a solution, but I'm not sure you will like it. The fastest way to process and convert buffered images is to extract the raw data array from inside the BufferedImage. You do that by calling buffImg.getRaster() and casting it to the specific raster type, then calling raster.getDataStorage(). Once you have access to the raw data, it is possible to write fast image-processing code without all the abstraction in BufferedImage slowing it down. This technique also requires an in-depth understanding of image formats and some reverse engineering on your part. This is the only way I have been able to get image-processing code to run fast enough for my applications.
Example:
ByteInterleavedRaster srcRaster = (ByteInterleavedRaster)src.getRaster();
byte srcData[] = srcRaster.getDataStorage();
IntegerInterleavedRaster dstRaster = (IntegerInterleavedRaster)dst.getRaster();
int dstData[] = dstRaster.getDataStorage();
dstData[0] = srcData[0] << 16 | srcData[1] << 8 | srcData[2];
or something like that. Expect compiler warnings telling you not to access low-level rasters like that. The only place I have had issues with this technique is inside applets, where an access violation will occur.
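If you would rather stay on the public API, the same idea works with the standard DataBuffer subclasses. Here is a rough sketch; it assumes a TYPE_3BYTE_BGR source and a TYPE_INT_RGB destination, which may not match a TYPE_CUSTOM image exactly:

import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;
import java.awt.image.DataBufferInt;

// Sketch: same direct-buffer idea as above, but without sun.awt.image imports.
public static void copyBgrToIntRgb(BufferedImage src, BufferedImage dst) {
    byte[] srcData = ((DataBufferByte) src.getRaster().getDataBuffer()).getData();
    int[] dstData = ((DataBufferInt) dst.getRaster().getDataBuffer()).getData();
    for (int i = 0, j = 0; i < dstData.length; i++) {
        int b = srcData[j++] & 0xFF; // TYPE_3BYTE_BGR stores bytes in B, G, R order
        int g = srcData[j++] & 0xFF;
        int r = srcData[j++] & 0xFF;
        dstData[i] = (r << 16) | (g << 8) | b;
    }
}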
I've found rendering with Graphics.drawImage() instead of ColorConvertOp to be 50 times faster. I can only assume that drawImage() is GPU-accelerated.
I.e. this is really slow, around 50 ms a go for 100x200 rectangles:
public BufferedImage convert(BufferedImage input) {
    BufferedImage output = new BufferedImage(input.getWidth(), input.getHeight(),
            BufferedImage.TYPE_BYTE_BINARY, CUSTOM_PALETTE);
    ColorConvertOp op = new ColorConvertOp(input.getColorModel().getColorSpace(),
            output.getColorModel().getColorSpace(), null);
    op.filter(input, output);
    return output;
}
I.e. this, however, registers < 1 ms for the same inputs:
public BufferedImage convert(BufferedImage input) {
    BufferedImage output = new BufferedImage(input.getWidth(), input.getHeight(),
            BufferedImage.TYPE_BYTE_BINARY, CUSTOM_PALETTE);
    Graphics graphics = output.getGraphics();
    graphics.drawImage(input, 0, 0, null);
    graphics.dispose();
    return output;
}
Have you tried supplying any RenderingHints? No guarantees, but using
ColorConvertOp op = new ColorConvertOp(new RenderingHints(
RenderingHints.KEY_COLOR_RENDERING,
RenderingHints.VALUE_COLOR_RENDER_SPEED));
rather than the null in your code snippet might speed it up somewhat.
I suspect the problem might be that ColorConvertOp() works pixel-by-pixel (guaranteed to be "slow").
Q: Is it possible for you to use gc.createCompatibleImage()?
Q: Is your original bitmap true color, or does it use a colormap?
Q: Failing all else, would you be agreeable to writing a JNI interface? Either to your own, custom C code, or to an external library such as ImageMagick?
If you have JAI installed then you might try uninstalling it, if you can, or otherwise look for some way to disable codecLib when loading JPEG. In a past life I had similar issues (http://www.java.net/node/660804) and ColorConvertOp was the fastest at the time.
As I recall the fundamental problem is that Java2D is not at all optimized for TYPE_CUSTOM images in general. When you install JAI it comes with codecLib which has a decoder that returns TYPE_CUSTOM and gets used instead of the default. The JAI list may be able to provide more help, it's been several years.
Maybe try this:
Bitmap source = Bitmap.createBitmap(width, height, Bitmap.Config.RGB_565); // don't remember exactly...
Canvas c = new Canvas(source);
// then
c.drawBitmap(bitmap, 0, 0, null);
Then the source bitmap will be modified.
Later you can do:
protected void onDraw(Canvas canvas) {
    canvas.drawBitmap(source, rectSrc, rectDestination, paint);
}
If you manage to always reuse the bitmap, you will get better performance. You can also use other Canvas functions to draw your bitmap.