I am working with OpenCV in Java, but I don't understand part of a class that loads pictures:
public class ImageProcessor {
    public BufferedImage toBufferedImage(Mat matrix) {
        int type = BufferedImage.TYPE_BYTE_GRAY;
        if (matrix.channels() > 1) {
            type = BufferedImage.TYPE_3BYTE_BGR;
        }
        int bufferSize = matrix.channels() * matrix.cols() * matrix.rows();
        byte[] buffer = new byte[bufferSize];
        matrix.get(0, 0, buffer); // get all the pixels
        BufferedImage image = new BufferedImage(matrix.cols(), matrix.rows(), type);
        final byte[] targetPixels = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();
        System.arraycopy(buffer, 0, targetPixels, 0, buffer.length);
        return image;
    }
}
The main class sends a Mat object to this method, and the result is a BufferedImage. But I don't understand targetPixels, because this class doesn't use it anywhere else. Yet whenever I comment out the targetPixels and System.arraycopy lines, the result shows a black picture.
I want to know: what is targetPixels, and what does it do?
The point is less about that array than about the methods that get you there.
You start here: getRaster(). That returns a WritableRaster.
From that raster the code calls getDataBuffer(), which comes from the Raster class; and there we find:
A class representing a rectangular array of pixels. A Raster encapsulates a DataBuffer that stores the sample values and a SampleModel that describes how to locate a given sample value in a DataBuffer.
What happens here, in essence: that Image object ultimately holds an array of bytes that is supposed to contain the pixel data.
The assignment
final byte[] targetPixels = ...
retrieves a reference to that internal data; and then System.arraycopy() is used to copy data into that array.
For the record: that doesn't look like a good approach, as it only works when this copy action really affects the internals of that Image object. But what if that last getData() call were to create a copy of the internal data?
In other words: this code tries to gain direct access to the internal data of some object, and then manipulates that internal data.
Even if that works today, it is not robust and might break easily in the future. The other thing: note that this code does an unconditional cast to (DataBufferByte). That code throws a RuntimeException if the buffer doesn't have exactly that type.
Probably that is "all fine", since it all relates to AWT classes that have existed for ages and will probably never change. But as said, this code has various potential issues.
targetPixels is the main image data (i.e. the pixels) of your new image. The actual image is created when the pixel data is copied from buffer into targetPixels.
targetPixels is the array of bytes backing your newly created BufferedImage; those bytes start out empty, so you need to copy the bytes from the buffer into it with System.arraycopy. :)
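Given the fragility noted above, here is a sketch of a variant that avoids touching the image's internal array and instead goes through the raster's public setDataElements() API (the class and method names here are my own, for illustration):

```java
import java.awt.image.BufferedImage;

public class SafeConverter {
    // Copy pixel bytes into the image through the official raster API,
    // instead of casting the DataBuffer and mutating its internal array.
    public static BufferedImage toBufferedImage(byte[] pixels, int width, int height, int channels) {
        int type = (channels > 1) ? BufferedImage.TYPE_3BYTE_BGR : BufferedImage.TYPE_BYTE_GRAY;
        BufferedImage image = new BufferedImage(width, height, type);
        // setDataElements copies the data in; it works regardless of how
        // the image stores its pixels internally.
        image.getRaster().setDataElements(0, 0, width, height, pixels);
        return image;
    }

    public static void main(String[] args) {
        byte[] gray = new byte[4 * 3]; // 4x3 single-channel image
        gray[0] = (byte) 200;
        BufferedImage img = toBufferedImage(gray, 4, 3, 1);
        System.out.println(img.getRaster().getSample(0, 0, 0)); // prints 200
    }
}
```

The tradeoff: setDataElements performs a copy through a documented API, so it keeps working even if the image's internal DataBuffer is not a DataBufferByte.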
Related
I am trying to implement a solution that captures a picture from the phone's camera, processes it (does something with it), and repeats this N times, quickly.
I have achieved this using the imageCapture.takePicture method, but when running the same process for N pictures, the onCaptureSuccess method is only called every ~500 ms (on a solid Samsung device). Capturing and saving the picture takes too long; I need it to be quicker than 500 ms.
I was looking to implement it using the imageAnalyzer class, and used code similar to this:
private class CameraAnalyzer implements ImageAnalysis.Analyzer {
    @Override
    public void analyze(@NonNull ImageProxy image) {
        ByteBuffer bb = image.getPlanes()[0].getBuffer();
        byte[] buf = new byte[bb.remaining()];
        bb.get(buf);
        // raw - data container
        raw = buf;
        // runnable - operate on the picture
        runnable.run();
        image.close();
    }
}
But I am receiving NULL for buf and the picture is always empty. bb.rewind() did not help either.
After being advised that the picture comes in a RAW format and thus needs to be converted to a Bitmap, I did that with this code:
ImageProxy.PlaneProxy[] planes = image.getPlanes();
ByteBuffer buffer = planes[0].getBuffer();
int pixelStride = planes[0].getPixelStride();
int rowStride = planes[0].getRowStride();
int rowPadding = rowStride - pixelStride * image.getWidth();
Bitmap bitmap = Bitmap.createBitmap(image.getWidth()+rowPadding/pixelStride,
image.getHeight(), Bitmap.Config.ARGB_8888);
bitmap.copyPixelsFromBuffer(buffer);
But while executing the copyPixelsFromBuffer I am encountering this issue:
E/AndroidRuntime: FATAL EXCEPTION: main
Process: com.strujomeri, PID: 24466
java.lang.RuntimeException: Buffer not large enough for pixels
How can I get the picture I want in imageAnalyzer, and also have its content in byte[] format to do with it what I want?
The default image format is YUV_420_888, and as your buffer is only one plane (the luminance (Y) plane), the buffer holds pixels that are only a single byte in size.
copyPixelsFromBuffer assumes that the buffer is in the same colour space as the bitmap, so since you set the bitmap format to ARGB_8888, it is looking for 4 bytes per pixel, not the 1 byte per pixel you are giving it.
I've not used it, but this page has an ImageProxy.toBitmap example that converts YUV to a Bitmap via a JPEG (if you just wanted a JPEG, you could skip the last step).
I've also seen another method that converts the YUV colour space directly to a bitmap, without going via JPEG.
You can of course switch to the only other colour space ImageAnalysis supports, ImageAnalysis.OUTPUT_IMAGE_FORMAT_RGBA_8888, which takes longer to capture because it needs converting, and is still not quite the right format for a Bitmap.
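The size mismatch behind the exception comes down to simple arithmetic; a plain-Java sketch (the 640x480 dimensions are made up for illustration, and row padding is ignored):

```java
public class BufferSizes {
    // Bytes required by copyPixelsFromBuffer for an ARGB_8888 bitmap.
    public static int argb8888Bytes(int width, int height) {
        return width * height * 4; // 4 bytes per pixel
    }

    // Bytes actually present in the single Y plane of a YUV_420_888 frame
    // (ignoring row padding for simplicity).
    public static int yPlaneBytes(int width, int height) {
        return width * height; // 1 byte per pixel
    }

    public static void main(String[] args) {
        int w = 640, h = 480;
        System.out.println("ARGB_8888 needs: " + argb8888Bytes(w, h)); // prints 1228800
        System.out.println("Y plane holds:   " + yPlaneBytes(w, h));   // prints 307200
        // The Y plane is 4x too small, hence "Buffer not large enough for pixels".
    }
}
```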
I want to pass a Pointer to a byte[] array to a native routine. I do not have any capability to modify the native code; I can merely utilize it.
The method expects an image as an LPSTR data type. I've looked at other C code that consumes the native routine; it appears to pass in an array of bytes.
So my process is to obtain a BufferedImage object, get the corresponding byte array for the image, and attempt to pass that to the native routine.
I've tried to pass the array directly, I've tried to write the array to Memory, and I've tried to pass a ByteBuffer as well. I still keep getting an 'Invalid memory access' exception.
Using the Memory Class:
byte[] imageBytes = getImageBytes(image);
Pointer imagePointer = new Memory(imageBytes.length);
imagePointer.write(0, imageBytes, 0, imageBytes.length);
nativeInterface.SendImageData(port, imagePointer, x, y, w, h);
Using the ByteBuffer:
ByteBuffer bBuffer = ByteBuffer.allocateDirect(imageBytes.length);
bBuffer.put(imageBytes);
Pointer imagePointer = Native.getDirectBufferPointer(bBuffer);
return cspStatInterface.SendImageData(port, imagePointer, x, y, w, h);
Passing Byte Array Directly:
byte[] imageBytes = getImageBytes(image);
nativeInterface.SendImageData(port, imageBytes, x, y, w, h);
I've been looking everywhere and can't seem to find a solution. I've verified that the byte array is filled with appropriate data as well. Am I not constructing the Pointer properly? Is there a better way to create it?
I'm receiving a Bitmap as a byte array through a socket; I read it, and then I want to set os.toByteArray() as an ImageView in my application. The code I use is:
try {
//bmp = BitmapFactory.decodeByteArray(result, 0, result.length);
bitmap_tmp = Bitmap.createBitmap(540, 719, Bitmap.Config.ARGB_8888);
ByteBuffer buffer = ByteBuffer.wrap(os.toByteArray());
bitmap_tmp.copyPixelsFromBuffer(buffer);
Log.d("Server",result+"Length:"+result.length);
runOnUiThread(new Runnable() {
@Override
public void run() {
imageView.setImageBitmap(bitmap_tmp);
}
});
return bmp;
} finally {
}
When I run my application and start receiving the byte[], I expect the ImageView to change, but it doesn't.
LogCat says:
java.lang.RuntimeException: Buffer not large enough for pixels at
android.graphics.Bitmap.copyPixelsFromBuffer
I searched in similar questions but couldn't find a solution to my problem.
Take a look at the source (version 2.3.4_r1, the last time Bitmap was updated on Grepcode prior to 4.4) for Bitmap::copyPixelsFromBuffer().
The wording of the error is a bit unclear, but the code clarifies it: your buffer is calculated as not having enough data to fill the pixels of your bitmap.
This is (possibly) because the implementation uses the buffer's remaining() method to figure the capacity of the buffer, which takes into account the current value of its position attribute. If you call rewind() on your buffer before you invoke copyPixelsFromBuffer(), you might see the runtime exception disappear. I say 'might' because the ByteBuffer::wrap() method should set the position attribute to zero, removing the need to call rewind(); but judging by similar questions and my own experience, resetting the position explicitly may do the trick.
Try
ByteBuffer buffer = ByteBuffer.wrap(os.toByteArray());
buffer.rewind();
bitmap_tmp.copyPixelsFromBuffer(buffer);
The buffer size should be exactly 1,553,040 bytes (given the bitmap's 540x719 dimensions and 32 bits to encode each pixel).
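That figure follows directly from the dimensions used in the question (540 x 719 pixels, 4 bytes per ARGB_8888 pixel); a quick check:

```java
public class ExpectedSize {
    // Bytes copyPixelsFromBuffer expects for a bitmap of the given geometry.
    public static int requiredBytes(int width, int height, int bytesPerPixel) {
        return width * height * bytesPerPixel;
    }

    public static void main(String[] args) {
        // 540 x 719 ARGB_8888 bitmap: 4 bytes per pixel.
        System.out.println(requiredBytes(540, 719, 4)); // prints 1553040
    }
}
```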
I am using JNA, and I am getting a byte array of raw data from my C++ method.
Now I am stuck on how to get a BufferedImage in Java using this raw-data byte array.
I have tried a few things to read it as a TIFF image, but I didn't succeed.
These are the approaches I have tried so far.
My byte array contains data for a 16-bit grayscale image; I get this data from an X-sensor device, and now I need to get an image from this byte array.
FIRST TRY
byte[] byteArray = myVar1.getByteArray(0, 3318000); // array of raw data
ImageInputStream stream1 = ImageIO.createImageInputStream(new ByteArrayInputStream(byteArray));
ByteArraySeekableStream stream = new ByteArraySeekableStream(byteArray, 0, 3318000);
BufferedImage bi = ImageIO.read(stream);
SECOND TRY
SeekableStream stream = new ByteArraySeekableStream(byteArray);
String[] names = ImageCodec.getDecoderNames(stream);
ImageDecoder dec = ImageCodec.createImageDecoder(names[0], stream, null);
//at this line get the error ArrayIndexOutOfBoundsException: 0
RenderedImage im = dec.decodeAsRenderedImage();
I think I am missing something here.
As my array contains raw data, it does not contain the header for a TIFF image.
Am I right?
If yes, how do I provide this header in the byte array, and eventually how do I get an image from this byte array?
To test that I am getting a proper byte array from my native method, I stored this byte array as a .raw file; after opening this raw file in the ImageJ software, it shows me the correct image, so my raw data is correct.
The only thing I need is this: how do I convert my raw byte array into an image byte array?
Here is what I am using to convert raw pixel data to a BufferedImage. My pixels are signed 16-bit:
public static BufferedImage short2Buffered(short[] pixels, int width, int height) throws IllegalArgumentException {
BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_USHORT_GRAY);
short[] imgData = ((DataBufferUShort)image.getRaster().getDataBuffer()).getData(); // TYPE_USHORT_GRAY is backed by a DataBufferUShort
System.arraycopy(pixels, 0, imgData, 0, pixels.length);
return image;
}
I'm then using JAI to encode the resulting image. Tell me if you need the code as well.
EDIT: I have greatly improved the speed thanks to @Brent Nash's answer on a similar question.
EDIT: For the sake of completeness, here is the code for unsigned 8 bits:
public static BufferedImage byte2Buffered(byte[] pixels, int width, int height) throws IllegalArgumentException {
BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_BYTE_GRAY);
byte[] imgData = ((DataBufferByte)image.getRaster().getDataBuffer()).getData();
System.arraycopy(pixels, 0, imgData, 0, pixels.length);
return image;
}
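As a quick self-contained check of the 8-bit helper above (the pixel values here are made up for illustration):

```java
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;

public class Byte2BufferedDemo {
    // Same helper as in the answer: wrap raw grayscale bytes in a BufferedImage.
    public static BufferedImage byte2Buffered(byte[] pixels, int width, int height) {
        BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_BYTE_GRAY);
        byte[] imgData = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();
        System.arraycopy(pixels, 0, imgData, 0, pixels.length);
        return image;
    }

    public static void main(String[] args) {
        // 2x2 grayscale image: black, dark gray, light gray, white.
        byte[] pixels = { 0, (byte) 64, (byte) 192, (byte) 255 };
        BufferedImage img = byte2Buffered(pixels, 2, 2);
        System.out.println(img.getRaster().getSample(1, 1, 0)); // prints 255
    }
}
```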
Whether or not the byte array contains literally just pixel data or a structured image file such as TIFF etc really depends on where you got it from. It's impossible to answer that from the information provided.
However, if it does contain a structured image file, then you can generally:
wrap a ByteArrayInputStream around it
pass that stream to ImageIO.read()
If you just have literally raw pixel data, then you have a couple of main options:
'manually' get that pixel data so that it is in an int array with one int per pixel in ARGB format (the ByteBuffer and IntBuffer classes can help you with twiddling about with bytes)
create a blank BufferedImage, then call its setRGB() method to set the actual pixel contents from your previously prepared int array
I think the above is easiest if you know what you're doing with the bits and bytes. However, in principle, you should be able to do the following:
find a suitable WritableRaster.create... method that will create a WritableRaster object wrapped around your data
pass that WritableRaster into the relevant BufferedImage constructor to create your image.
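A minimal sketch of the setRGB() route described above, assuming the raw data has already been unpacked into one packed ARGB int per pixel (class and method names are hypothetical):

```java
import java.awt.image.BufferedImage;

public class RawToImage {
    // Build an image from one packed ARGB int per pixel.
    public static BufferedImage fromArgb(int[] argb, int width, int height) {
        BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
        // setRGB copies the int array into the image in one call.
        image.setRGB(0, 0, width, height, argb, 0, width);
        return image;
    }

    public static void main(String[] args) {
        int[] argb = new int[2 * 2];
        argb[0] = 0xFFFF0000; // opaque red in the top-left corner
        BufferedImage img = fromArgb(argb, 2, 2);
        System.out.printf("%08X%n", img.getRGB(0, 0)); // prints FFFF0000
    }
}
```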
I'm trying to take a BufferedImage, apply a Fourier transform (using jtransforms), and write the data back to the BufferedImage. But I'm stuck creating a new Raster to set the results back; am I missing something here?
BufferedImage bitmap;
float [] bitfloat = null;
bitmap = ImageIO.read(new File("filename"));
FloatDCT_2D dct = new FloatDCT_2D(bitmap.getWidth(),bitmap.getHeight());
bitfloat = bitmap.getData().getPixels(0, 0, bitmap.getWidth(), bitmap.getHeight(), bitfloat);
dct.forward(bitfloat, false);
But I'm stumped trying to finish off this line; what should I give the createRaster function? The javadocs for createRaster make little sense to me:
bitmap.setData(Raster.createRaster(`arg1`, `arg2`, `arg3`));
I'm starting to wonder if a float array is even necessary, but there aren't many examples of jtransforms out there.
Don't create a new Raster. Use WritableRaster.setPixels(int,int,int,int,float[]) to write the array back to the image.
final int w = bitmap.getWidth();
final int h = bitmap.getHeight();
final WritableRaster wr = bitmap.getRaster(); // getData() would return a copy, not the live raster
bitfloat = wr.getPixels(0, 0, w, h, bitfloat);
// do processing here
wr.setPixels(0, 0, w, h, bitfloat);
Note also that if you're planning to display this image, you should really copy it to a screen-compatible type; ImageIO seldom returns those.
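Copying to a screen-compatible type can be sketched like this (TYPE_INT_RGB is used here as a common choice; the helper name is my own):

```java
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class CompatibleCopy {
    // Draw the source into a fresh image of a display-friendly type.
    public static BufferedImage toIntRgb(BufferedImage src) {
        BufferedImage dst = new BufferedImage(src.getWidth(), src.getHeight(),
                BufferedImage.TYPE_INT_RGB);
        Graphics2D g = dst.createGraphics();
        g.drawImage(src, 0, 0, null); // performs the color-model conversion
        g.dispose();
        return dst;
    }

    public static void main(String[] args) {
        BufferedImage gray = new BufferedImage(4, 4, BufferedImage.TYPE_BYTE_GRAY);
        BufferedImage rgb = toIntRgb(gray);
        System.out.println(rgb.getType() == BufferedImage.TYPE_INT_RGB); // prints true
    }
}
```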
I'm doing Google searches for FloatDCT_2D to see what package/library it's in, and it looks like there are several references to various sources, such as "edu.emory.mathcs.jtransforms.dct.FloatDCT_2D". Without knowing what custom library you're using, it's really hard to give you any advice on how to perform the transform.
My guess is in general, that you should read the input data from the original raster, perform the transform on the original data, then write the output to a new raster.
However, your last statement all on its own looks odd... Raster.createRaster() looks like you're calling a static method on a class you've never referenced in the code you've posted. How is that generating data for your bitmap? Even in pseudocode, you would need to take the results of your transform and build the resultant raster.