When I use copyPixelsFromBuffer and copyPixelsToBuffer, the bitmap does not display the same content. I have tried the code below:
Bitmap bm = BitmapFactory.decodeByteArray(a, 0, a.length);
int[] pixels = new int[bm.getWidth() * bm.getHeight()];
bm.getPixels(pixels, 0, bm.getWidth(), 0, 0, bm.getWidth(), bm.getHeight());
ByteBuffer buffer = ByteBuffer.allocate(bm.getRowBytes() * bm.getHeight());
bm.copyPixelsToBuffer(buffer); // copy the pixels from Bitmap bm into the buffer
ByteBuffer buffer1 = ByteBuffer.wrap(buffer.array());
newbm = Bitmap.createBitmap(160, 160, Config.RGB_565);
newbm.copyPixelsFromBuffer(buffer1); // read the pixels from the buffer into the Bitmap newbm
imageview1.setImageBitmap(newbm);
imageview2.setImageBitmap(bm);
Why do the Bitmaps bm and newbm not display the same content?
In your code, you are copying the pixels into a bitmap with RGB_565 format, whereas the original bitmap from which you got the pixels must be in a different format (most likely ARGB_8888, the default for BitmapFactory).
The problem is clear from the documentation of copyPixelsFromBuffer():
The data in the buffer is not changed in any way (unlike setPixels(), which converts from unpremultiplied 32bit to whatever the bitmap's native format is).
So either use the same bitmap format for both bitmaps, use setPixels() instead, or draw the original bitmap onto the new one with a Canvas.drawBitmap() call.
Also, use bm.getWidth() and bm.getHeight() to specify the size of the new bitmap instead of hard-coding it as 160.
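For illustration, here is a minimal sketch of the same round trip with a matching config and size (it assumes bm comes from BitmapFactory, so its config is non-null):

// Sketch: round-trip pixels through a ByteBuffer using a matching config and size.
ByteBuffer buf = ByteBuffer.allocate(bm.getRowBytes() * bm.getHeight());
bm.copyPixelsToBuffer(buf);
buf.rewind(); // copyPixelsToBuffer advances the position, so reset it before reading
Bitmap newbm = Bitmap.createBitmap(bm.getWidth(), bm.getHeight(), bm.getConfig());
newbm.copyPixelsFromBuffer(buf);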
I am trying to use this new feature of CameraX Image Analysis (version 1.1.0-alpha08): using setOutputImageFormat(ImageAnalysis.OUTPUT_IMAGE_FORMAT_RGBA_8888), images sent to the analyzer will have RGBA format.
See this for reference: https://developer.android.com/reference/androidx/camera/core/ImageAnalysis#OUTPUT_IMAGE_FORMAT_RGBA_8888
I need to turn the image sent to the analyzer into a Bitmap so that I can input it to a TensorFlow classifier.
Without this new feature, I would receive the image in the standard YUV_420_888 format and would then have to use one of the several solutions that can be googled to turn YUV_420_888 into RGBA and then into a Bitmap, like this one: https://blog.minhazav.dev/how-to-convert-yuv-420-sp-android.media.Image-to-Bitmap-or-jpeg/.
I assume getting the Media Image directly in RGBA format should help me avoid implementing those painful solutions (which I have actually tried and which do not seem to work very well for me so far).
The problem is that I don't know how to turn this RGBA Media Image into a Bitmap. I have noticed that calling mediaImage.getFormat() returns 1, which is not an ImageFormat value but a PixelFormat one, the one logically corresponding to the RGBA_8888 format, which is in line with the documentation: "All ImageProxy sent to ImageAnalysis.Analyzer.analyze(ImageProxy) will have format PixelFormat.RGBA_8888".
I have tried this:
private Bitmap toBitmapRGBA(Image image) {
    Image.Plane[] planes = image.getPlanes();
    ByteBuffer buffer = planes[0].getBuffer();
    buffer.rewind();
    int size = buffer.remaining();
    byte[] bytes = new byte[size];
    buffer.get(bytes);
    Bitmap bitmapImage = BitmapFactory.decodeByteArray(bytes, 0, bytes.length, null);
    return bitmapImage;
}
This returns null, indicating that decodeByteArray does not work (I notice the image has only one plane).
private Bitmap toBitmapRGBA2(Image image) {
    Image.Plane[] planes = image.getPlanes();
    ByteBuffer buffer = planes[0].getBuffer();
    Bitmap bitmap = Bitmap.createBitmap(image.getWidth(), image.getHeight(), Bitmap.Config.ARGB_8888);
    buffer.rewind();
    bitmap.copyPixelsFromBuffer(buffer);
    return bitmap;
}
This returns a Bitmap that looks like nothing but noise.
Please help!
Kind regards
Mickael
I actually found a solution myself, so I am posting it here in case anyone is interested:
private Bitmap toBitmap(Image image) {
    Image.Plane[] planes = image.getPlanes();
    ByteBuffer buffer = planes[0].getBuffer();
    int pixelStride = planes[0].getPixelStride();
    int rowStride = planes[0].getRowStride();
    int rowPadding = rowStride - pixelStride * image.getWidth();
    // Widen the bitmap so each buffer row, including its padding, maps onto one bitmap row.
    Bitmap bitmap = Bitmap.createBitmap(image.getWidth() + rowPadding / pixelStride,
            image.getHeight(), Bitmap.Config.ARGB_8888);
    bitmap.copyPixelsFromBuffer(buffer);
    return bitmap;
}
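Note that this bitmap comes out slightly wider than the actual image whenever there is row padding. If that matters, one option (a sketch of mine, not part of the original solution) is to crop the padding columns away afterwards:

// Sketch: crop away the padding columns so the bitmap matches the image size.
Bitmap cropped = Bitmap.createBitmap(bitmap, 0, 0, image.getWidth(), image.getHeight());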
If you want to process the pixel array further without creating a bitmap object, you can do something like this:
// Note: toByteArray() is not a standard ByteBuffer method; a helper extension like this is assumed:
fun ByteBuffer.toByteArray(): ByteArray { rewind(); return ByteArray(remaining()).also { get(it) } }

val data = imageProxy.planes[0].buffer.toByteArray()
val pixels = IntArray(data.size / imageProxy.planes[0].pixelStride) {
    var index = it * imageProxy.planes[0].pixelStride
    // Pack the RGBA bytes into an ARGB color int (mask each byte first, then shift).
    (data[index++].toInt() and 0xff).shl(16) or
        (data[index++].toInt() and 0xff).shl(8) or
        (data[index++].toInt() and 0xff).shl(0) or
        (data[index].toInt() and 0xff).shl(24)
}
And then you can create a bitmap this way:
Bitmap.createBitmap(
    pixels,
    0,
    imageProxy.planes[0].rowStride / imageProxy.planes[0].pixelStride,
    imageProxy.width,
    imageProxy.height,
    Bitmap.Config.ARGB_8888
)
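Passing rowStride / pixelStride as the stride argument makes createBitmap skip the row-padding pixels at the end of each buffer row, which is why no separate cropping step is needed here.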
I'm in the draw loop of an Android view:
Bitmap bitmap = Bitmap.createBitmap(this.getWidth(),
        this.getHeight(), Bitmap.Config.ARGB_4444);
Canvas newCanvas = new Canvas(bitmap);
super.draw(newCanvas);
Log.d("AndroidUnity", "Canvas Drawn!");
mImageView.setImageBitmap(bitmap);
The above code shows me the correct drawing in the attached ImageView.
When I convert the bitmap to a byte array:
ByteBuffer byteBuffer = ByteBuffer.allocate(bitmap.getByteCount());
bitmap.copyPixelsToBuffer(byteBuffer);
byte[] bytes = byteBuffer.array();
Importing the bytes into Unity does not work (it shows a black image on my RawImage):
imageTexture2D = new Texture2D(width, height, TextureFormat.ARGB4444, false);
imageTexture2D.LoadRawTextureData(bytes);
imageTexture2D.Apply();
RawImage.texture = imageTexture2D;
Any ideas on how to get the Java byte[] to display as a texture/image in Unity? I've tested that the bytes are being sent correctly, i.e. when I push a byte array of {1,2,3,4} from Android, I get {1,2,3,4} on the Unity side.
One thing not mentioned above: Unity throws an error when trying to transfer the bytes as a byte[], so instead I have to follow this advice on the C# side:
void ReceieveAndroidBytes(AndroidJavaObject jo) {
    AndroidJavaObject bufferObject = jo.Get<AndroidJavaObject>("Buffer");
    byte[] bytes = AndroidJNIHelper.ConvertFromJNIArray<byte[]>(bufferObject.GetRawObject());
}
and a trivial byte[] container class "Buffer" on the Java side.
I was trying to do the exact same thing, and my initial attempts also produced a black texture. I do the array conversion with AndroidJNIHelper.ConvertFromJNIArray like you do, except I used sbyte[] instead of byte[]. To set the actual image data I ended up using
imageTexture2D.SetPixelData(bytes, 0);
If I'm not mistaken, LoadRawTextureData expects something even rawer than an array of pixel data; it might be the format in which graphics cards store textures, with compression. If that is true, raw pixel data isn't in the right format, so it can't be decoded.
I'm sorry that I'm not good at English.
I want to make a real-time image-manipulation app (binarization, color inversion, etc.).
It needs speed, so I want to manipulate the data as a byte[], without converting it to an Image or Bitmap.
The format of the image I get from the camera is YUV (NV21), but because I don't know this format, I converted the data to JPEG.
But that doesn't work as I expected either (I thought it would be one byte per pixel, or three bytes per pixel).
So,
how can I do such manipulations (binarization, color inversion) on a JPEG byte array?
Or, how can I convert an NV21-format byte array to an RGB byte array?
This is the method I used to convert NV21 to JPEG:
YuvImage yuvimage = new YuvImage(bytes, ImageFormat.NV21, width, height, null);
ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
yuvimage.compressToJpeg(new Rect(0, 0, width, height), 100, outputStream);
I got the YUV image byte array from onPreviewFrame(Camera.PreviewCallback).
I think you can use setPreviewFormat(int) to set a format other than the default NV21 for camera captures.
https://developer.android.com/reference/android/graphics/ImageFormat.html#NV21
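If you do need RGB directly, here is a minimal, unoptimized sketch of the standard BT.601-style NV21-to-ARGB conversion (the method name and constants are my own, not from the question):

// Sketch: convert an NV21 byte array to ARGB_8888 pixel ints.
public static int[] nv21ToArgb(byte[] nv21, int width, int height) {
    int[] argb = new int[width * height];
    int frameSize = width * height;
    for (int row = 0; row < height; row++) {
        for (int col = 0; col < width; col++) {
            int yIndex = row * width + col;
            // NV21 stores the full-resolution Y plane first, then interleaved V/U at half resolution.
            int uvIndex = frameSize + (row >> 1) * width + (col & ~1);
            int y = nv21[yIndex] & 0xFF;
            int v = (nv21[uvIndex] & 0xFF) - 128;
            int u = (nv21[uvIndex + 1] & 0xFF) - 128;
            int r = clamp((int) (y + 1.402f * v));
            int g = clamp((int) (y - 0.344f * u - 0.714f * v));
            int b = clamp((int) (y + 1.772f * u));
            argb[yIndex] = 0xFF000000 | (r << 16) | (g << 8) | b;
        }
    }
    return argb;
}

private static int clamp(int c) {
    return Math.max(0, Math.min(255, c));
}

The resulting int[] can be binarized or inverted in place, or handed to Bitmap.createBitmap(argb, width, height, Bitmap.Config.ARGB_8888) if a Bitmap is needed after all.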
I have a camera sending frames to a SurfaceView. I would like to get these frames out of the surface view and send them elsewhere. In their final form, the images must be in JPEG format. To accomplish this currently, I am creating a YUV image from the byte[] and then calling compressToJpeg. However, when I invoke compressToJpeg on every frame rather than doing nothing but displaying it, my FPS goes from ~30 to ~4. I commented out the other lines and this function appears to be the culprit.
public void onNewRawImage(byte[] data, Size size) {
    // Convert to JPEG
    YuvImage yuvimage = new YuvImage(data,
            ImageFormat.NV21, size.width, size.height, null);
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    yuvimage.compressToJpeg(new Rect(0, 0, yuvimage.getWidth(),
            yuvimage.getHeight()), 80, baos);
    byte[] jdata = baos.toByteArray();
    // Convert to Bitmap
    Bitmap bitmap = BitmapFactory.decodeByteArray(jdata, 0, jdata.length);
}
Is it possible to start in the JPEG format rather than having to convert to it? I am hoping I am making a mistake somewhere. Any help is greatly appreciated, thank you.
I am creating an application for Android where I need to manipulate my JPG files. I could not find much header information for the JPG format, so I convert the file to a Bitmap, manipulate the pixel values in the bitmap, and then convert it back to JPG.
The problem I am facing is this: after manipulating only some pixels of the bitmap and converting it back to JPG, I do not get the same set of pixel values I had earlier (even for the pixels I did not manipulate). The new image looks the same as the original, but when I decode it and check its pixel values, the untouched pixels are different...
File imagefile = new File(filepath);
FileInputStream fis = new FileInputStream(imagefile);
Bitmap bi = BitmapFactory.decodeStream(fis);
int intArray[];
bi = bi.copy(Bitmap.Config.ARGB_8888, true);
intArray = new int[bi.getWidth() * bi.getHeight()];
bi.getPixels(intArray, 0, bi.getWidth(), 0, 0, bi.getWidth(), bi.getHeight());
int newArray[] = encodeImage(msgbytes, intArray, mbytes); // method where I manipulate my pixel values

// converting the bitmap data back to a JPG file
bi = Bitmap.createBitmap(newArray, bi.getWidth(), bi.getHeight(), Bitmap.Config.ARGB_8888);
ByteArrayOutputStream baos = new ByteArrayOutputStream();
bi.compress(Bitmap.CompressFormat.JPEG, 100, baos);
byte[] data = baos.toByteArray();
Bitmap bitmapimage = BitmapFactory.decodeByteArray(data, 0, data.length);

String newFilepath = "/sdcard/image/new2.jpg"; // renamed: filepath and imagefile are already declared above
File newImagefile = new File(newFilepath);
FileOutputStream fos = new FileOutputStream(newImagefile);
bitmapimage.compress(CompressFormat.JPEG, 100, fos);
Please tell me if I am going wrong somewhere, or whether I should use some other method to manipulate JPG pixel values.
JPEG is an image format that is usually based on lossy compression. That means that some information that is not important for the human eye is thrown away to further shrink the file size. Try to save your image as a PNG (a lossless format).
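For example, the last step of the code above could write a PNG instead (PNG ignores the quality argument, and the untouched pixels survive the round trip unchanged):

// Sketch: write the bitmap losslessly as PNG instead of JPEG.
FileOutputStream fos = new FileOutputStream("/sdcard/image/new2.png");
bitmapimage.compress(Bitmap.CompressFormat.PNG, 100, fos);
fos.close();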
Be careful with using
Bitmap bi = BitmapFactory.decodeStream(fis);
bi = bi.copy(Bitmap.Config.ARGB_8888, true);
At the point where you have the first bi, you may have already lost a lot of information. Instead, try using BitmapFactory.Options to force ARGB_8888 (which is the default anyway):
BitmapFactory.Options options = new BitmapFactory.Options();
options.inPreferredConfig = Bitmap.Config.ARGB_8888;
options.inDither = false;
Bitmap bi = BitmapFactory.decodeStream(fis, options);
If you stay with copy(), you should still recycle() the bitmap that you throw away.
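A minimal sketch of that (variable names are mine):

// Sketch: keep the mutable ARGB_8888 copy and free the intermediate bitmap.
Bitmap decoded = BitmapFactory.decodeStream(fis);
Bitmap bi = decoded.copy(Bitmap.Config.ARGB_8888, true);
decoded.recycle(); // release the pixel memory of the bitmap we no longer need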