I am using JNA, and I am getting a byte array of raw data from my C++ method.
Now I am stuck on how to get a BufferedImage in Java from this raw data byte array.
I have tried a few things to get it as a TIFF image, but without success.
This is the code I have tried so far.
My byte array contains data for a 16-bit grayscale image. I get this data from an X-sensor device, and now I need to build an image from this byte array.
FIRST TRY
byte[] byteArray = myVar1.getByteArray(0, 3318000); // array of raw data
ImageInputStream stream1 = ImageIO.createImageInputStream(new ByteArrayInputStream(byteArray));
ByteArraySeekableStream stream = new ByteArraySeekableStream(byteArray, 0, 3318000);
BufferedImage bi = ImageIO.read(stream);
SECOND TRY
SeekableStream stream = new ByteArraySeekableStream(byteArray);
String[] names = ImageCodec.getDecoderNames(stream);
// the next line throws ArrayIndexOutOfBoundsException: 0,
// because getDecoderNames() recognised no codec in the raw data
ImageDecoder dec = ImageCodec.createImageDecoder(names[0], stream, null);
RenderedImage im = dec.decodeAsRenderedImage();
I think I am missing something here.
As my array contains raw data, it does not contain the header for a TIFF image.
Am I right?
If yes, then how do I provide this header in the byte array, and eventually how do I get an image from it?
To verify that I am getting a proper byte array from my native method, I stored it as a .raw file; after opening this file in the ImageJ software it shows the correct image, so my raw data is correct.
The only thing I need to know is how to convert my raw byte array into an image byte array.
Here is what I am using to convert raw pixel data to a BufferedImage. My pixels are signed 16-bit:
public static BufferedImage short2Buffered(short[] pixels, int width, int height) throws IllegalArgumentException {
    BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_USHORT_GRAY);
    // TYPE_USHORT_GRAY is backed by a DataBufferUShort (not a DataBufferShort)
    short[] imgData = ((DataBufferUShort) image.getRaster().getDataBuffer()).getData();
    System.arraycopy(pixels, 0, imgData, 0, pixels.length);
    return image;
}
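Since JNA hands you a byte[], you first need to reinterpret it as a short[]. A minimal sketch, assuming two bytes per pixel and little-endian byte order (check what your sensor actually emits; width and height must come from your device specs):

byte[] byteArray = myVar1.getByteArray(0, 3318000);
short[] pixels = new short[byteArray.length / 2];
// wrap the raw bytes and read them out as 16-bit values
ByteBuffer.wrap(byteArray)
          .order(ByteOrder.LITTLE_ENDIAN) // assumption: sensor data is little-endian
          .asShortBuffer()
          .get(pixels);
BufferedImage image = short2Buffered(pixels, width, height);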
I'm then using JAI to encode the resulting image. Tell me if you need the code as well.
EDIT: I have greatly improved the speed thanks to @Brent Nash's answer on a similar question.
EDIT: For the sake of completeness, here is the code for unsigned 8-bit pixels:
public static BufferedImage byte2Buffered(byte[] pixels, int width, int height) throws IllegalArgumentException {
    BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_BYTE_GRAY);
    byte[] imgData = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();
    System.arraycopy(pixels, 0, imgData, 0, pixels.length);
    return image;
}
Whether the byte array contains literally just pixel data or a structured image file such as a TIFF really depends on where you got it from. It's impossible to answer that from the information provided.
However, if it does contain a structured image file, then you can generally:
wrap a ByteArrayInputStream around it
pass that stream to ImageIO.read()
If you just have literally raw pixel data, then you have a couple of main options:
'manually' get that pixel data so that it is in an int array with one int per pixel in ARGB format (the ByteBuffer and IntBuffer classes can help you with twiddling about with bytes)
create a blank BufferedImage, then call its setRGB() method to set the actual pixel contents from your previously prepared int array
I think the above is easiest if you know what you're doing with the bits and bytes. However, in principle, you should be able to do the following:
find a suitable Raster.create... factory method that will create a WritableRaster object wrapped around your data (the factory methods live on the Raster class); see the sketch after this list
pass that WritableRaster into the relevant BufferedImage constructor to create your image.
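A minimal sketch of that raster route for 16-bit grayscale data, assuming your pixels are already in a short[] called shortPixels (the names shortPixels, width and height are illustrative):

// wrap the existing array; no pixel data is copied
DataBufferUShort db = new DataBufferUShort(shortPixels, shortPixels.length);
// one band, one sample per pixel, scanline stride == width
WritableRaster raster = Raster.createInterleavedRaster(
        db, width, height, width, 1, new int[]{0}, null);
ColorModel cm = new ComponentColorModel(
        ColorSpace.getInstance(ColorSpace.CS_GRAY),
        new int[]{16}, false, false,
        Transparency.OPAQUE, DataBuffer.TYPE_USHORT);
BufferedImage image = new BufferedImage(cm, raster, false, null);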
Related
I am trying to implement a solution that captures a picture from the phone's camera, then operates on the picture (does something with it), and repeats this process N times, quickly.
I have achieved this using the imageCapture.takePicture method, but when trying to implement the same process for N pictures, the onCaptureSuccess method is only called every ~500ms (on a solid Samsung device). The process of capturing and saving the picture takes too long for me. I need it to be quicker than 500ms.
I was looking to implement it using the ImageAnalysis class, and used code similar to this:
private class CameraAnalyzer implements ImageAnalysis.Analyzer {
    @Override
    public void analyze(@NonNull ImageProxy image) {
        ByteBuffer bb = image.getPlanes()[0].getBuffer();
        byte[] buf = new byte[bb.remaining()];
        bb.get(buf);
        // raw - data container
        raw = buf;
        // runnable - operates on the picture
        runnable.run();
        image.close();
    }
}
But I am receiving NULL for buf and the picture is always empty. bb.rewind() did not help either.
After being advised that the picture comes in a RAW format and thus needs to be converted to a Bitmap, I did it with this code:
ImageProxy.PlaneProxy[] planes = image.getPlanes();
ByteBuffer buffer = planes[0].getBuffer();
int pixelStride = planes[0].getPixelStride();
int rowStride = planes[0].getRowStride();
int rowPadding = rowStride - pixelStride * image.getWidth();
Bitmap bitmap = Bitmap.createBitmap(image.getWidth() + rowPadding / pixelStride,
        image.getHeight(), Bitmap.Config.ARGB_8888);
bitmap.copyPixelsFromBuffer(buffer);
But while executing the copyPixelsFromBuffer I am encountering this issue:
E/AndroidRuntime: FATAL EXCEPTION: main
Process: com.strujomeri, PID: 24466
java.lang.RuntimeException: Buffer not large enough for pixels
How can I get the picture I want in the analyzer, and also have its content in byte[] format to do with it what I want?
The default image format is YUV_420_888, and as your buffer is only one plane (the luminance (Y) plane), the buffer holds pixels that are only a single byte in size.
copyPixelsFromBuffer assumes that the buffer is in the same colour space as the bitmap, so as you set the bitmap format to ARGB_8888, it is looking for 4 bytes per pixel, not the 1 byte per pixel you are giving it.
I've not used this myself, but this page has an ImageProxy.toBitmap example that converts a YUV image to a Bitmap via a JPEG (if you just wanted a JPEG you could skip the last step).
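For reference, here is a rough sketch of that via-JPEG route. It is only a sketch: it assumes pixelStride == 1 and rowStride == width on every plane, and real devices often pad rows, so production code must walk the strides:

private static Bitmap yuvToBitmap(ImageProxy image) {
    int w = image.getWidth(), h = image.getHeight();
    ByteBuffer yBuf = image.getPlanes()[0].getBuffer();
    ByteBuffer uBuf = image.getPlanes()[1].getBuffer();
    ByteBuffer vBuf = image.getPlanes()[2].getBuffer();
    byte[] nv21 = new byte[w * h * 3 / 2];
    yBuf.get(nv21, 0, w * h);
    // NV21 stores chroma as interleaved V,U pairs after the luma plane
    int chromaSize = w * h / 4;
    byte[] u = new byte[chromaSize];
    byte[] v = new byte[chromaSize];
    uBuf.get(u, 0, chromaSize);
    vBuf.get(v, 0, chromaSize);
    for (int i = 0; i < chromaSize; i++) {
        nv21[w * h + 2 * i] = v[i];
        nv21[w * h + 2 * i + 1] = u[i];
    }
    YuvImage yuv = new YuvImage(nv21, ImageFormat.NV21, w, h, null);
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    yuv.compressToJpeg(new Rect(0, 0, w, h), 90, out);
    byte[] jpeg = out.toByteArray();
    // skip this decode step if a JPEG byte[] is all you need
    return BitmapFactory.decodeByteArray(jpeg, 0, jpeg.length);
}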
I've also seen another method that skips the JPEG step and converts the YUV colour space directly to a bitmap.
You can of course switch to the only other colour space ImageAnalysis supports, ImageAnalysis.OUTPUT_IMAGE_FORMAT_RGBA_8888, which takes longer to capture because it needs converting, and is still not the right format for a Bitmap.
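If you want to try that route, the output format is requested on the ImageAnalysis builder (available since CameraX 1.1, as far as I know):

// request RGBA_8888 frames instead of the default YUV_420_888
ImageAnalysis analysis = new ImageAnalysis.Builder()
        .setOutputImageFormat(ImageAnalysis.OUTPUT_IMAGE_FORMAT_RGBA_8888)
        .build();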
I am working with OpenCV in Java, but I don't understand part of a class that loads pictures:
public class ImageProcessor {
    public BufferedImage toBufferedImage(Mat matrix) {
        int type = BufferedImage.TYPE_BYTE_GRAY;
        if (matrix.channels() > 1) {
            type = BufferedImage.TYPE_3BYTE_BGR;
        }
        int bufferSize = matrix.channels() * matrix.cols() * matrix.rows();
        byte[] buffer = new byte[bufferSize];
        matrix.get(0, 0, buffer); // get all the pixels
        BufferedImage image = new BufferedImage(matrix.cols(), matrix.rows(), type);
        final byte[] targetPixels = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();
        System.arraycopy(buffer, 0, targetPixels, 0, buffer.length);
        return image;
    }
}
The main class passes a Mat object to this class.
The method returns a BufferedImage, but I don't understand targetPixels, because the class doesn't use it anywhere else. Yet whenever I comment out targetPixels and the System.arraycopy call, the result is a black picture.
I want to know what targetPixels is, and what it does.
The point is less about that array than about the methods that get you there.
You start here: getRaster(). That is supposed to return a WritableRaster... and so on.
That code uses getDataBuffer() from the Raster class; and there we find:
A class representing a rectangular array of pixels. A Raster encapsulates a DataBuffer that stores the sample values and a SampleModel that describes how to locate a given sample value in a DataBuffer.
What happens in essence here: that image object, in the end, has an array of bytes that are supposed to contain certain information.
That assignment:
final byte[] targetPixels = ...
retrieves a reference to that internal data; and then System.arraycopy() is used to copy data into that array.
For the record: that doesn't look like a good approach, as it only works when this copy action really affects the internals of that image object. But what if that last call, getData(), created a copy of the internal data?
In other words: this code tries to gain direct access to the internal data of some object, and then manipulates that internal data.
Even if that works today, it is not robust and might break easily in the future. The other thing: note that this code does an unconditional cast to (DataBufferByte). That code throws a RuntimeException if the buffer doesn't have exactly that type.
Probably that is "all fine", since it is all related to "AWT" classes which have existed for ages and will probably not change any more. But as said, this code has various potential issues.
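For what it's worth, a safer variant lets the raster do the copying instead of grabbing its internal array (a sketch; equivalent result, without the unconditional cast):

// copies buffer into the image without touching its internal array directly
image.getRaster().setDataElements(0, 0, matrix.cols(), matrix.rows(), buffer);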
targetPixels is the main image data (i.e. the pixels) of your new image. The actual image is created when the pixel data is copied from buffer into targetPixels.
targetPixels is the array of bytes behind your newly created BufferedImage; those bytes start out empty, so you need to copy the bytes from the buffer into it with System.arraycopy :)
I'm trying to convert a .BMP picture into a byte[] or byte[][]. I have already tried two solutions and I get strange results.
First:
Bitmap bitmap = BitmapFactory.decodeResource(getResources(), R.drawable.qr);
int bytes = bitmap.getByteCount();
ByteBuffer buffer = ByteBuffer.allocate(bytes);
bitmap.copyPixelsToBuffer(buffer);
byte[] byt = buffer.array();
In byt I get an array with over 36,000 elements, all with the value -1.
My second try was:
Bitmap bitmap = BitmapFactory.decodeResource(getResources(), R.drawable.qr);
ByteArrayOutputStream stream = new ByteArrayOutputStream();
bitmap.compress(Bitmap.CompressFormat.PNG, 100, stream);
byte[] byteArray = stream.toByteArray();
But now I have an array with over 1200 elements whose values look random (from -180 to 100).
Is it possible to map a BMP to an array which has 0 for the WHITE color and 1 for BLACK? I'm stuck and don't know what to do. The pic:
PNG is an encoded format, so it won't be straightforward to analyze as an array; go with your first instinct.
This image is 210x210, so you should expect 44100 color values. Android doesn't know that it's black and white, so you're going to get full color values for each pixel and will need to convert those to simple 0s and 1s.
See the getPixels method, which might be more straightforward for your use case. It will dump the pixel colors into an int[] argb array that you can then simplify into black vs. white, as sketched below.
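A sketch of that approach, thresholding each pixel to 1 for black and 0 for white (the 128 cutoff is an arbitrary midpoint; tune as needed):

Bitmap bitmap = BitmapFactory.decodeResource(getResources(), R.drawable.qr);
int w = bitmap.getWidth();
int h = bitmap.getHeight();
int[] argb = new int[w * h];
bitmap.getPixels(argb, 0, w, 0, 0, w, h);
byte[][] bits = new byte[h][w];
for (int y = 0; y < h; y++) {
    for (int x = 0; x < w; x++) {
        int p = argb[y * w + x];
        int r = (p >> 16) & 0xFF;
        int g = (p >> 8) & 0xFF;
        int b = p & 0xFF;
        // simple average-luminance threshold: dark pixels become 1
        bits[y][x] = (byte) (((r + g + b) / 3) < 128 ? 1 : 0);
    }
}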
I get the pixels from a BufferedImage using the method getRGB(). The pixels are stored in an array called data[]. After some manipulation of the data array, I need to create a BufferedImage again from it, so that I can pass it to a module which will display the modified image, but I am stuck with it.
I get the pixels from the BufferedImage using the method getRGB(). The
pixels are stored in array called data[].
Note that this can possibly be terribly slow. If your BufferedImage supports it, you may want to instead access the underlying int[] and directly copy/read the pixels from there.
For example, to quickly copy your data[] into the underlying int[] of a new BufferedImage:
BufferedImage bi = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
final int[] a = ((DataBufferInt) bi.getRaster().getDataBuffer()).getData();
System.arraycopy(data, 0, a, 0, data.length);
Of course you want to make sure that your data[] contains pixels in the same representation as your BufferedImage (ARGB in this example).
BufferedImage bufferedImage = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
Then set the pixels again.
bufferedImage.setRGB(x, y, your_value);
PS: as stated in the comments, please use the answer from @TacticalCoder.
You can set the RGB (int) values for the pixels in the new image using the setRGB methods.
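For completeness, a sketch of the bulk setRGB variant, assuming data[] holds width*height packed ARGB ints straight from getRGB():

BufferedImage out = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
// one bulk call; scansize == width because data[] is tightly packed
out.setRGB(0, 0, width, height, data, 0, width);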
I'm trying to take a BufferedImage, apply a Fourier transform (using jtransforms), and write the data back to the BufferedImage. But I'm stuck creating a new Raster to set the results back; am I missing something here?
BufferedImage bitmap;
float[] bitfloat = null;
bitmap = ImageIO.read(new File("filename"));
FloatDCT_2D dct = new FloatDCT_2D(bitmap.getWidth(), bitmap.getHeight());
bitfloat = bitmap.getData().getPixels(0, 0, bitmap.getWidth(), bitmap.getHeight(), bitfloat);
dct.forward(bitfloat, false);
But I'm stumped trying to finish off this line: what should I give the createRaster function? The javadocs for createRaster make little sense to me:
bitmap.setData(Raster.createRaster(`arg1`, `arg2`, `arg3`));
I'm starting to wonder if a float array is even necessary, but there aren't many examples of jtransforms out there.
Don't create a new Raster. Use WritableRaster.setPixels(int,int,int,int,float[]) to write the array back to the image.
final int w = bitmap.getWidth();
final int h = bitmap.getHeight();
final WritableRaster wr = bitmap.getRaster(); // getRaster(), not getData(): getData() only returns a copy
bitfloat = wr.getPixels(0, 0, w, h, bitfloat);
// do processing here
wr.setPixels(0, 0, w, h, bitfloat);
Note also that if you're planning to display this image, you should really copy it to a screen-compatible type; ImageIO seldom returns those.
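One way to make such a screen-compatible copy, as a sketch (reusing w and h from above):

GraphicsConfiguration gc = GraphicsEnvironment.getLocalGraphicsEnvironment()
        .getDefaultScreenDevice().getDefaultConfiguration();
BufferedImage compatible = gc.createCompatibleImage(w, h);
Graphics2D g = compatible.createGraphics();
g.drawImage(bitmap, 0, 0, null); // paint the processed image into the compatible copy
g.dispose();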
I'm doing Google searches for FloatDCT_2D to see what package/library it's in, and it looks like there are several references to various sources, such as "edu.emory.mathcs.jtransforms.dct.FloatDCT_2D". Without knowing what custom library you're using, it's really hard to give you any advice on how to perform the transform.
My guess is in general, that you should read the input data from the original raster, perform the transform on the original data, then write the output to a new raster.
However, your last statement all on its own looks odd... Raster.createRaster() looks like you're calling a static method on a class you've never referenced in the code you've posted. How is that generating data for your bitmap? Even in pseudo-code, you would need to take the results of your transform and build the resultant raster.