A list of images and stickers (WebP format) must be shown in a RecyclerView. To show a sticker in an ImageView, this repository (https://github.com/EverythingMe/webp-android) is used; it was one of the suggested solutions in this post (WebP for Android).
The sticker file is read from external storage and converted to a byte array; using the repository's library, the byte array is converted to a Bitmap, and finally the Bitmap is shown in the ImageView. The code below converts a sticker file to a Bitmap:
private void showStickerOnImageView(String stickerPath) throws IOException {
    File file = new File(stickerPath);
    int size = (int) file.length();
    byte[] bytes = new byte[size];
    BufferedInputStream buf = new BufferedInputStream(new FileInputStream(file));
    // Note: a single read() is not guaranteed to fill the array;
    // DataInputStream.readFully() would be safer.
    buf.read(bytes, 0, bytes.length);
    buf.close();

    Bitmap bitmap;
    // Lossless/transparent WebP is only decoded natively from API 18 (JELLY_BEAN_MR2).
    boolean NATIVE_WEB_P_SUPPORT = Build.VERSION.SDK_INT >= Build.VERSION_CODES.JELLY_BEAN_MR2;
    if (!NATIVE_WEB_P_SUPPORT) {
        bitmap = WebPDecoder.getInstance().decodeWebP(bytes);
    } else {
        bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
    }
    holder.imageView.setImageBitmap(bitmap);
}
.....
public Bitmap decodeWebP(byte[] encoded, int w, int h) {
    int[] width = new int[]{w};
    int[] height = new int[]{h};
    byte[] decoded = decodeRGBAnative(encoded, encoded.length, width, height);
    if (decoded.length == 0) return null;
    int[] pixels = new int[decoded.length / 4];
    ByteBuffer.wrap(decoded).asIntBuffer().get(pixels);
    return Bitmap.createBitmap(pixels, width[0], height[0], Bitmap.Config.ARGB_8888);
}
When NATIVE_WEB_P_SUPPORT is false, the decodeWebP method is called. It works fine most of the time, but sometimes an OutOfMemoryError is thrown in this method, most often on these lines:
int[] pixels = new int[decoded.length / 4];
ByteBuffer.wrap(decoded).asIntBuffer().get(pixels);
return Bitmap.createBitmap(pixels, width[0], height[0], Bitmap.Config.ARGB_8888);
I found that the byte array of the sticker file is large, and each of the lines above allocates another full-size copy of the decoded image (the int[] alone is width * height * 4 bytes, and createBitmap allocates the same amount again). Can I decrease the sticker file size programmatically? I want to find a solution that decreases the byte array size.
You are creating the Bitmap at its native size, but it is only displayed in an ImageView. Scale the Bitmap down to the size of the view:
Bitmap yourThumbnail = Bitmap.createScaledBitmap(
        theOriginalBitmap,
        desiredWidth,
        desiredHeight,
        false);
Do note that:
public static Bitmap createBitmap(int colors[], int width, int height, Config config) {
    return createBitmap(null, colors, 0, width, width, height, config);
}
Will call
public static Bitmap createBitmap(DisplayMetrics display, int colors[],
        int offset, int stride, int width, int height, Config config)
And that will lead to:
Bitmap bm = nativeCreate(colors, offset, stride, width, height,
        config.nativeInt, false);
Basically, you cannot keep a huge Bitmap in memory for no reason. If this is for phones, assume roughly a 20 MB heap budget for the application.
An 800*600*4 image yields 1,920,000 bytes. Lower the image quality, for example by using RGB_565 (half the bytes per pixel compared with ARGB_8888), or pre-scale your source bitmap.
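As a minimal sketch of that idea (shrinkForView, source, viewWidth and viewHeight are illustrative names, not from the question): scale the decoded bitmap down to the view size first, then optionally convert it to RGB_565. Keep in mind that RGB_565 discards the alpha channel, so it only suits opaque images, not transparent stickers.

private Bitmap shrinkForView(Bitmap source, int viewWidth, int viewHeight) {
    // Scale down to the size at which the ImageView will actually draw it.
    Bitmap scaled = Bitmap.createScaledBitmap(source, viewWidth, viewHeight, true);
    // Optionally halve the per-pixel memory cost; this drops transparency.
    Bitmap smaller = scaled.copy(Bitmap.Config.RGB_565, false);
    if (scaled != source) {
        scaled.recycle(); // free the intermediate copy promptly
    }
    return smaller;
}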
I am trying to use this new feature of CameraX Image Analysis (version 1.1.0-alpha08): using setOutputImageFormat(ImageAnalysis.OUTPUT_IMAGE_FORMAT_RGBA_8888), images sent to the analyzer will have RGBA format.
See this for reference: https://developer.android.com/reference/androidx/camera/core/ImageAnalysis#OUTPUT_IMAGE_FORMAT_RGBA_8888
I need to turn the image sent to the analyzer into a Bitmap so that I can input it to a TensorFlow classifier.
Without this new feature I would receive the image in the standard YUV_420_888 format then I would have to use one of the several solutions that can be googled in order to turn YUV_420_888 to RGBA then to Bitmap. Like this: https://blog.minhazav.dev/how-to-convert-yuv-420-sp-android.media.Image-to-Bitmap-or-jpeg/.
I assume getting the Media Image directly in RGBA format should help me avoid implementing those painful solutions (which I have actually tried, and they do not seem to work very well for me so far).
The problem is I don't know how to turn this RGBA Media Image into a Bitmap. I have noticed that calling mediaImage.getFormat() returns 1, which is not an ImageFormat value but a PixelFormat one, logically corresponding to RGBA_8888, in line with the documentation: "All ImageProxy sent to ImageAnalysis.Analyzer.analyze(ImageProxy) will have format PixelFormat.RGBA_8888".
I have tried this:
private Bitmap toBitmapRGBA(Image image) {
    Image.Plane[] planes = image.getPlanes();
    ByteBuffer buffer = planes[0].getBuffer();
    buffer.rewind();
    int size = buffer.remaining();
    byte[] bytes = new byte[size];
    buffer.get(bytes);
    Bitmap bitmapImage = BitmapFactory.decodeByteArray(bytes, 0, bytes.length, null);
    return bitmapImage;
}
This returns null, indicating that decodeByteArray does not work here (decodeByteArray expects compressed image data such as JPEG or PNG, not raw pixels). I notice the image has got only one plane. So I also tried this:
private Bitmap toBitmapRGBA2(Image image) {
    Image.Plane[] planes = image.getPlanes();
    ByteBuffer buffer = planes[0].getBuffer();
    Bitmap bitmap = Bitmap.createBitmap(image.getWidth(), image.getHeight(), Bitmap.Config.ARGB_8888);
    buffer.rewind();
    bitmap.copyPixelsFromBuffer(buffer);
    return bitmap;
}
This returns a Bitmap that looks like nothing but noise.
Please help!
Kind regards
Mickael
I actually found a solution myself, so I am posting it here in case anyone is interested:
private Bitmap toBitmap(Image image) {
    Image.Plane[] planes = image.getPlanes();
    ByteBuffer buffer = planes[0].getBuffer();
    int pixelStride = planes[0].getPixelStride();
    int rowStride = planes[0].getRowStride();
    // Each row may be padded beyond width * pixelStride; account for it,
    // otherwise the copied pixels shift a little further on every row (the "noise").
    int rowPadding = rowStride - pixelStride * image.getWidth();
    Bitmap bitmap = Bitmap.createBitmap(image.getWidth() + rowPadding / pixelStride,
            image.getHeight(), Bitmap.Config.ARGB_8888);
    bitmap.copyPixelsFromBuffer(buffer);
    return bitmap;
}
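One caveat: when rowPadding is non-zero, the bitmap above ends up slightly wider than the source image. If the extra columns matter, a crop (a sketch reusing the bitmap and image from the method above) removes them:

Bitmap cropped = Bitmap.createBitmap(bitmap, 0, 0, image.getWidth(), image.getHeight());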
If you want to process the pixel array further without creating a Bitmap object, you can do something like this (toByteArray() here is a small helper extension, not an SDK function):

// Helper: copy the ByteBuffer's contents into a ByteArray.
fun ByteBuffer.toByteArray(): ByteArray {
    rewind()
    return ByteArray(remaining()).also { get(it) }
}

val data = imageProxy.planes[0].buffer.toByteArray()
val pixels = IntArray(data.size / imageProxy.planes[0].pixelStride) {
    var index = it * imageProxy.planes[0].pixelStride
    // Pack the RGBA bytes into an ARGB_8888 color int.
    ((data[index++].toInt() and 0xff) shl 16) or
            ((data[index++].toInt() and 0xff) shl 8) or
            ((data[index++].toInt() and 0xff) shl 0) or
            ((data[index].toInt() and 0xff) shl 24)
}
And then you can create a Bitmap this way:
Bitmap.createBitmap(
        pixels,
        0,
        imageProxy.planes[0].rowStride / imageProxy.planes[0].pixelStride,
        imageProxy.width,
        imageProxy.height,
        Bitmap.Config.ARGB_8888
)
So I'm using the legacy Camera API (as far as I can tell) to get previewFrame callbacks, to then run a few machine learning models I have. I have confirmed that the machine learning models work when given a bitmap decoded when I take a picture via the onPictureTaken callback. Right now, in the samples below, I am just testing on ML Kit's barcode scanner as a base case, but my custom models seemed to work fine with the onPictureTaken callback as well.
From what I've gathered, using onPreviewFrame isn't necessarily the best way to do this, but for the sake of having a quick sample to play around with (and a learning experience) I decided to just go this route. Based on everything I've tried from others' solutions online, I can't seem to get anything to work properly. The below code returns null:
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    // Log.d("onPreviewFrame bytes.length", String.valueOf(bytes.length));
    // final Bitmap bmp = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
    // Log.d("onPreviewFrame bmp.getHeight()", String.valueOf(bmp.getHeight()));
    Camera.Parameters parameters = camera.getParameters();
    int width = parameters.getPreviewSize().width;
    int height = parameters.getPreviewSize().height;
    Log.d("onPreviewFrame - width", String.valueOf(width));
    Log.d("onPreviewFrame - height", String.valueOf(height));
    Log.d("onPreviewFrame - parameters.getPreviewFormat()", String.valueOf(parameters.getPreviewFormat()));
    YuvImage yuv = new YuvImage(data, parameters.getPreviewFormat(), width, height, null);
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    yuv.compressToJpeg(new Rect(0, 0, width, height), 100, out);
    //
    // byte[] bytes = out.toByteArray();
    // final Bitmap bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
    byte[] bytes = yuv.getYuvData();
    // decodeByteArray expects compressed data (e.g. JPEG), not raw YUV, so this yields null.
    final Bitmap bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
    extractBarcode(FirebaseVisionImage.fromBitmap(bitmap), bitmap);
}
Here's something else I tried:
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    // Log.d("onPreviewFrame bytes.length", String.valueOf(bytes.length));
    // final Bitmap bmp = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
    // Log.d("onPreviewFrame bmp.getHeight()", String.valueOf(bmp.getHeight()));
    Camera.Parameters parameters = camera.getParameters();
    int width = parameters.getPreviewSize().width;
    int height = parameters.getPreviewSize().height;
    Log.d("onPreviewFrame - width", String.valueOf(width));
    Log.d("onPreviewFrame - height", String.valueOf(height));
    YuvImage yuv = new YuvImage(data, parameters.getPreviewFormat(), width, height, null);
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    yuv.compressToJpeg(new Rect(0, 0, width, height), 100, out);
    byte[] bytes = out.toByteArray();
    final Bitmap bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
    extractBarcode(FirebaseVisionImage.fromBitmap(bitmap), bitmap);
}
Unfortunately I got this error:
ML Kit has detected that you seem to pass camera frames to the detector as a Bitmap object. This is inefficient. Please use YUV_420_888 format for camera2 API or NV21 format for (legacy) camera API and directly pass down the byte array to ML Kit.
with parameters.getPreviewFormat() returning 17, which is NV21. I also tried simply changing that to ImageFormat.YUV_420_888, but that resulted in the IllegalArgumentException below:
only support ImageFormat.NV21 and ImageFormat.YUY2 for now
Instead of using the Camera API, try using CameraX. It's easy to use, and you can execute your code whenever a frame is received from the camera. While trying to integrate an ML model with the camera, I faced a similar error and then turned to CameraX.
Basically, we'll create an ImageAnalysis.Analyzer through which we get the Image objects (frames). Using an extension function, we will convert each Image object to a YuvImage.
You can follow this codelab to use CameraX to analyze frames. You will create a class that implements the ImageAnalysis.Analyzer interface.
class FrameAnalyser() : ImageAnalysis.Analyzer {
    override fun analyze(image: ImageProxy?, rotationDegrees: Int) {
        val yuvImage = image?.image?.toYuv() // The extension function
    }
}
Create the extension function which transforms the Image to a YuvImage.
private fun Image.toYuv(): YuvImage {
    val yBuffer = planes[0].buffer
    val uBuffer = planes[1].buffer
    val vBuffer = planes[2].buffer
    val ySize = yBuffer.remaining()
    val uSize = uBuffer.remaining()
    val vSize = vBuffer.remaining()
    val nv21 = ByteArray(ySize + uSize + vSize)
    // NV21 is the full Y plane followed by interleaved VU samples. This straight
    // copy relies on the chroma planes already being interleaved (pixel stride 2)
    // with no row padding, which holds on many, but not all, devices.
    yBuffer.get(nv21, 0, ySize)
    vBuffer.get(nv21, ySize, vSize)
    uBuffer.get(nv21, ySize + vSize, uSize)
    val yuvImage = YuvImage(nv21, ImageFormat.NV21, this.width, this.height, null)
    return yuvImage
}
You can change the YUV Image format as required. Refer to these docs.
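Also, since the ML Kit warning in the question asks for the NV21 byte array directly, one option is to skip the Bitmap entirely. A sketch in Java (assuming an nv21 array built as in the extension above, plus the frame's width and height):

FirebaseVisionImageMetadata metadata = new FirebaseVisionImageMetadata.Builder()
        .setFormat(FirebaseVisionImageMetadata.IMAGE_FORMAT_NV21)
        .setWidth(width)
        .setHeight(height)
        .setRotation(FirebaseVisionImageMetadata.ROTATION_0)
        .build();
// Pass the raw NV21 bytes to ML Kit without any Bitmap conversion.
FirebaseVisionImage visionImage = FirebaseVisionImage.fromByteArray(nv21, metadata);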
Instead of directly passing the FirebaseVisionImage
extractBarcode(FirebaseVisionImage.fromBitmap(bitmap), bitmap);
you can do it like this
var bitmap = toARGBBitmap(ocrBitmap)
extractBarcode(FirebaseVisionImage.fromBitmap(bitmap), bitmap)

private fun toARGBBitmap(img: Bitmap): Bitmap {
    return img.copy(Bitmap.Config.ARGB_8888, true)
}
You can try this:)
I want to take only a part of the screen data from a preview video callback, to reduce the processing time. The problem is I only know how to take the whole screen with onPreviewFrame:
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    myData = data;
    // + get camera resolution x, y
}
And then, with this data, I get the image:
private Bitmap getBitmapFromYUV(byte[] data, int width, int height) {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    YuvImage yuvImage = new YuvImage(data, ImageFormat.NV21, width, height, null);
    yuvImage.compressToJpeg(new Rect(0, 0, width, height), 100, out);
    byte[] imageBytes = out.toByteArray();
    Bitmap image = BitmapFactory.decodeByteArray(imageBytes, 0, imageBytes.length);
    return image;
}
And then I take the part of the image I want:
cutImage = Bitmap.createBitmap(image, xOffset, yOffset, customWidth, customHeight);
The problem is that I need to take lots of images to apply some image processing on them, and that's why I want to reduce the time it takes to get each one. Instead of taking the whole screen and then cropping it, I want to immediately get the cropped image. Is there a way to get only part of the screen data?
OK, I finally found something. I still receive all the data from the camera, but when using compressToJpeg I crop the picture with a custom Rect. Maybe there is something better to do before this, but it is still a good improvement. Here are my changes:
yuvImage.compressToJpeg(new Rect(offsetX, offsetY, sizeCaptureX + offsetX, sizeCaptureY + offsetY), 100, out);
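For context, a minimal sketch of the whole method under the same assumptions (offsetX, offsetY, sizeCaptureX and sizeCaptureY are the crop origin and size; data is an NV21 preview frame):

private Bitmap getCroppedBitmapFromYUV(byte[] data, int width, int height,
                                       int offsetX, int offsetY,
                                       int sizeCaptureX, int sizeCaptureY) {
    YuvImage yuvImage = new YuvImage(data, ImageFormat.NV21, width, height, null);
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    // compressToJpeg only encodes the pixels inside the Rect, so the JPEG
    // (and therefore the decoded Bitmap) contains just the cropped region.
    yuvImage.compressToJpeg(
            new Rect(offsetX, offsetY, offsetX + sizeCaptureX, offsetY + sizeCaptureY),
            100, out);
    byte[] imageBytes = out.toByteArray();
    return BitmapFactory.decodeByteArray(imageBytes, 0, imageBytes.length);
}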
I have an image which is stored as a byte[] array, and I want to flip the image before I send it off to be processed elsewhere (as a byte[] array).
I've searched around and can't find a simple solution without manipulating each bit in the byte[] array.
What about converting the byte[] array to an image type of some sort, flipping that using an existing flip method, and then converting it back to a byte[] array?
Any advice?
Cheers!
Byte array to bitmap:
Bitmap bmp = BitmapFactory.decodeByteArray(byteArray, 0, byteArray.length);
Use this to rotate the image by providing the right angle (180). Note that a 180-degree rotation flips both axes at once; for a mirror flip along a single axis, use the Matrix.preScale approach shown in the next answer:
public Bitmap rotateImage(int angle, Bitmap bitmapSrc) {
    Matrix matrix = new Matrix();
    matrix.postRotate(angle);
    return Bitmap.createBitmap(bitmapSrc, 0, 0,
            bitmapSrc.getWidth(), bitmapSrc.getHeight(), matrix, true);
}
Then back to the array:
ByteArrayOutputStream stream = new ByteArrayOutputStream();
bmp.compress(Bitmap.CompressFormat.PNG, 100, stream);
byte[] flippedImageByteArray = stream.toByteArray();
The following is a method that flips an image stored as a byte array and returns the result as a byte array.
private byte[] flipImage(byte[] data, int flip) {
    Bitmap bmp = BitmapFactory.decodeByteArray(data, 0, data.length);
    Matrix matrix = new Matrix();
    switch (flip) {
        case 1: matrix.preScale(1.0f, -1.0f); break;  // flip vertical
        case 2: matrix.preScale(-1.0f, 1.0f); break;  // flip horizontal
        default: matrix.preScale(1.0f, 1.0f);         // no flip
    }
    Bitmap bmp2 = Bitmap.createBitmap(bmp, 0, 0, bmp.getWidth(), bmp.getHeight(), matrix, true);
    ByteArrayOutputStream stream = new ByteArrayOutputStream();
    bmp2.compress(Bitmap.CompressFormat.JPEG, 100, stream);
    return stream.toByteArray();
}
If you want a vertically flipped image, pass 1 as the flip value; for a horizontal flip, pass 2.
For example:
@Override
public void onPictureTaken(byte[] data, Camera camera) {
    byte[] verticalFlippedImage = flipImage(data, 1);
    byte[] horizontalFlippedImage = flipImage(data, 2);
}
When I am using copyPixelsFromBuffer and copyPixelsToBuffer, the bitmap is not displayed as the same image. I have tried the code below:
Bitmap bm = BitmapFactory.decodeByteArray(a, 0, a.length);
int[] pixels = new int[bm.getWidth() * bm.getHeight()];
bm.getPixels(pixels, 0, bm.getWidth(), 0, 0, bm.getWidth(), bm.getHeight());
ByteBuffer buffer = ByteBuffer.allocate(bm.getRowBytes() * bm.getHeight());
bm.copyPixelsToBuffer(buffer);        // copy the pixels from Bitmap bm into the buffer
ByteBuffer buffer1 = ByteBuffer.wrap(buffer.array());
newbm = Bitmap.createBitmap(160, 160, Config.RGB_565);
newbm.copyPixelsFromBuffer(buffer1);  // read pixels from the buffer into the Bitmap newbm
imageview1.setImageBitmap(newbm);
imageview2.setImageBitmap(bm);
Why do the Bitmaps bm and newbm not display the same content?
In your code, you are copying the pixels into a bitmap with the RGB_565 format, whereas the original bitmap from which you got the pixels must be in a different format (BitmapFactory.decodeByteArray typically produces ARGB_8888).
The problem is clear from the documentation of copyPixelsFromBuffer():
The data in the buffer is not changed in any way (unlike setPixels(), which converts from unpremultiplied 32-bit to whatever the bitmap's native format is).
So either use the same bitmap format, use setPixels(), or draw the original bitmap onto the new one using a Canvas.drawBitmap() call.
Also, use bm.getWidth() and bm.getHeight() to specify the size of the new bitmap instead of hard-coding it as 160, as in the sketch below.
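A minimal sketch of the first option, matching both the format and the size (this assumes bm.getConfig() is non-null, which holds for bitmaps produced by BitmapFactory.decodeByteArray):

Bitmap bm = BitmapFactory.decodeByteArray(a, 0, a.length);
ByteBuffer buffer = ByteBuffer.allocate(bm.getRowBytes() * bm.getHeight());
bm.copyPixelsToBuffer(buffer); // raw pixels out of the source bitmap
buffer.rewind();               // reset the position before reading back
// Same dimensions and the same config as the source, so the raw bytes line up.
Bitmap newbm = Bitmap.createBitmap(bm.getWidth(), bm.getHeight(), bm.getConfig());
newbm.copyPixelsFromBuffer(buffer);
imageview1.setImageBitmap(newbm); // now displays the same content as bm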