So I'm using the legacy Camera API (as far as I can tell) to get onPreviewFrame callbacks and then run a few machine learning models I have. I have confirmed that the models work when given a bitmap decoded from a picture taken via the onPictureTaken callback. In the samples below I'm just testing against ML Kit's barcode scanner as a base case, but my custom models also seemed to work fine with the onPictureTaken callback.
From what I've gathered, using onPreviewFrame isn't necessarily the best way to do this, but for the sake of a quick sample to play around with (and a learning experience) I decided to go this route. Based on everything I've tried from solutions others have posted online, I can't seem to get anything to work properly. The code below returns null:
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    Camera.Parameters parameters = camera.getParameters();
    int width = parameters.getPreviewSize().width;
    int height = parameters.getPreviewSize().height;
    Log.d("onPreviewFrame - width", String.valueOf(width));
    Log.d("onPreviewFrame - height", String.valueOf(height));
    Log.d("onPreviewFrame - parameters.getPreviewFormat()", String.valueOf(parameters.getPreviewFormat()));

    YuvImage yuv = new YuvImage(data, parameters.getPreviewFormat(), width, height, null);
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    yuv.compressToJpeg(new Rect(0, 0, width, height), 100, out);

    // getYuvData() returns the raw NV21 bytes; BitmapFactory cannot decode
    // raw YUV (only compressed formats such as JPEG or PNG), so this is null.
    byte[] bytes = yuv.getYuvData();
    final Bitmap bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
    extractBarcode(FirebaseVisionImage.fromBitmap(bitmap), bitmap);
}
Here's something else I tried:
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    Camera.Parameters parameters = camera.getParameters();
    int width = parameters.getPreviewSize().width;
    int height = parameters.getPreviewSize().height;
    Log.d("onPreviewFrame - width", String.valueOf(width));
    Log.d("onPreviewFrame - height", String.valueOf(height));

    YuvImage yuv = new YuvImage(data, parameters.getPreviewFormat(), width, height, null);
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    yuv.compressToJpeg(new Rect(0, 0, width, height), 100, out);

    byte[] bytes = out.toByteArray();
    final Bitmap bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
    extractBarcode(FirebaseVisionImage.fromBitmap(bitmap), bitmap);
}
Unfortunately I got this error:
ML Kit has detected that you seem to pass camera frames to the detector as a Bitmap object. This is inefficient. Please use YUV_420_888 format for camera2 API or NV21 format for (legacy) camera API and directly pass down the byte array to ML Kit.
with parameters.getPreviewFormat() returning 17, which is NV21. I also tried simply changing that to ImageFormat.YUV_420_888, but that resulted in the illegal argument exception below:
only support ImageFormat.NV21 and ImageFormat.YUY2 for now
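For reference, the route the warning suggests (handing the NV21 byte array straight to ML Kit) would presumably look something like this; an untested sketch, with width, height, and data being the same variables as in the callbacks above:

// Wrap the raw NV21 preview bytes in metadata and pass them directly
// to the detector, skipping the Bitmap conversion entirely.
FirebaseVisionImageMetadata metadata = new FirebaseVisionImageMetadata.Builder()
        .setFormat(FirebaseVisionImageMetadata.IMAGE_FORMAT_NV21)
        .setWidth(width)
        .setHeight(height)
        .setRotation(FirebaseVisionImageMetadata.ROTATION_0) // adjust for device orientation
        .build();
FirebaseVisionImage image = FirebaseVisionImage.fromByteArray(data, metadata);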
Instead of using the Camera API, try using CameraX. It's easy to use, and you can execute your code whenever a frame is received from the camera. While trying to integrate an ML model with the camera, I faced a similar error and then turned to CameraX.
Basically, we'll create an ImageAnalysis.Analyzer class through which we get the Image object (the frames). Using an extension function, we will convert this Image object to a YuvImage.
You can follow this codelab to use CameraX to analyze frames. You will create a class that implements the ImageAnalysis.Analyzer interface.
class FrameAnalyser : ImageAnalysis.Analyzer {
    override fun analyze(image: ImageProxy?, rotationDegrees: Int) {
        val yuvImage = image?.image?.toYuv() // The extension function defined below
    }
}
Create the extension function which transforms the Image to a YuvImage.
private fun Image.toYuv(): YuvImage {
    val yBuffer = planes[0].buffer
    val uBuffer = planes[1].buffer
    val vBuffer = planes[2].buffer
    val ySize = yBuffer.remaining()
    val uSize = uBuffer.remaining()
    val vSize = vBuffer.remaining()
    val nv21 = ByteArray(ySize + uSize + vSize)
    // NV21 expects the V and U samples interleaved after the Y plane, so copy
    // V before U (this assumes the chroma planes are not padded).
    yBuffer.get(nv21, 0, ySize)
    vBuffer.get(nv21, ySize, vSize)
    uBuffer.get(nv21, ySize + vSize, uSize)
    return YuvImage(nv21, ImageFormat.NV21, this.width, this.height, null)
}
You can change the YUV Image format as required. Refer to these docs.
Instead of directly passing the bitmap to FirebaseVisionImage, like this:
extractBarcode(FirebaseVisionImage.fromBitmap(bitmap), bitmap);
you can do it like this
var bitmap = toARGBBitmap(ocrBitmap)
extractBarcode(FirebaseVisionImage.fromBitmap(bitmap), bitmap);
private fun toARGBBitmap(img: Bitmap): Bitmap {
    return img.copy(Bitmap.Config.ARGB_8888, true)
}
You can try this:)
I am trying to use a new feature of CameraX Image Analysis (version 1.1.0-alpha08): by calling setOutputImageFormat(ImageAnalysis.OUTPUT_IMAGE_FORMAT_RGBA_8888), images sent to the analyzer will have the RGBA format.
See this for reference: https://developer.android.com/reference/androidx/camera/core/ImageAnalysis#OUTPUT_IMAGE_FORMAT_RGBA_8888
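For context, the analyzer is configured along these lines (a minimal sketch; the backpressure strategy is just my choice of example):

ImageAnalysis imageAnalysis = new ImageAnalysis.Builder()
        // Request RGBA output instead of the default YUV_420_888.
        .setOutputImageFormat(ImageAnalysis.OUTPUT_IMAGE_FORMAT_RGBA_8888)
        .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
        .build();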
I need to turn the image sent to the analyzer into a Bitmap so that I can feed it to a TensorFlow classifier.
Without this new feature I would receive the image in the standard YUV_420_888 format and would then have to use one of the several solutions that can be googled for turning YUV_420_888 into RGBA and then into a Bitmap, like this one: https://blog.minhazav.dev/how-to-convert-yuv-420-sp-android.media.Image-to-Bitmap-or-jpeg/.
I assume that getting the media Image directly in RGBA format should let me avoid those painful conversions (which I have actually tried, and which have not worked very well for me so far).
The problem is that I don't know how to turn this RGBA media Image into a Bitmap. I have noticed that calling mediaImage.getFormat() returns 1, which is not an ImageFormat value but a PixelFormat one (the one logically corresponding to RGBA_8888), and this is in line with the documentation: "All ImageProxy sent to ImageAnalysis.Analyzer.analyze(ImageProxy) will have format PixelFormat.RGBA_8888".
I have tried this:
private Bitmap toBitmapRGBA(Image image) {
    Image.Plane[] planes = image.getPlanes();
    ByteBuffer buffer = planes[0].getBuffer();
    buffer.rewind();
    int size = buffer.remaining();
    byte[] bytes = new byte[size];
    buffer.get(bytes);
    // decodeByteArray expects compressed data such as JPEG or PNG,
    // not raw RGBA pixels, so this returns null.
    Bitmap bitmapImage = BitmapFactory.decodeByteArray(bytes, 0, bytes.length, null);
    return bitmapImage;
}
This returns null, indicating that decodeByteArray does not work (I notice the image has only one plane). I also tried this:
private Bitmap toBitmapRGBA2(Image image) {
    Image.Plane[] planes = image.getPlanes();
    ByteBuffer buffer = planes[0].getBuffer();
    Bitmap bitmap = Bitmap.createBitmap(image.getWidth(), image.getHeight(), Bitmap.Config.ARGB_8888);
    buffer.rewind();
    // This ignores the row stride (rows may be padded beyond width * pixelStride),
    // which is why the result comes out as noise.
    bitmap.copyPixelsFromBuffer(buffer);
    return bitmap;
}
This returns a Bitmap that looks like nothing but noise.
Please help!
I actually found a solution myself, so I'm posting it here in case anyone is interested:
private Bitmap toBitmap(Image image) {
    Image.Plane[] planes = image.getPlanes();
    ByteBuffer buffer = planes[0].getBuffer();
    int pixelStride = planes[0].getPixelStride();
    int rowStride = planes[0].getRowStride();
    // Each row may be padded beyond width * pixelStride; widen the bitmap so
    // copyPixelsFromBuffer consumes the padding instead of misaligning rows.
    int rowPadding = rowStride - pixelStride * image.getWidth();
    Bitmap bitmap = Bitmap.createBitmap(image.getWidth() + rowPadding / pixelStride,
            image.getHeight(), Bitmap.Config.ARGB_8888);
    bitmap.copyPixelsFromBuffer(buffer);
    return bitmap;
}
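For completeness, here is a sketch of how this can be called from the analyzer (getImage() is the experimental CameraX accessor, and closing the proxy afterwards is required so frames keep arriving):

@Override
public void analyze(ImageProxy imageProxy) {
    Image mediaImage = imageProxy.getImage(); // requires @ExperimentalGetImage
    if (mediaImage != null) {
        Bitmap bitmap = toBitmap(mediaImage);
        // ... feed the bitmap to the TensorFlow classifier ...
    }
    imageProxy.close();
}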
If you want to process the pixel array further without creating a Bitmap object, you can do something like this:
// toByteArray() is not a standard ByteBuffer method; a minimal version:
fun ByteBuffer.toByteArray(): ByteArray {
    rewind()
    return ByteArray(remaining()).also { get(it) }
}

val data = imageProxy.planes[0].buffer.toByteArray()
val pixels = IntArray(data.size / imageProxy.planes[0].pixelStride) {
    var index = it * imageProxy.planes[0].pixelStride
    // Repack the R, G, B, A bytes into a single ARGB_8888 pixel int.
    (data[index++].toInt() and 0xff).shl(16) or
            (data[index++].toInt() and 0xff).shl(8) or
            (data[index++].toInt() and 0xff).shl(0) or
            (data[index].toInt() and 0xff).shl(24)
}
And then you can create the bitmap this way:
Bitmap.createBitmap(
    pixels,
    0,
    // stride in pixels; this accounts for any row padding in the source buffer
    imageProxy.planes[0].rowStride / imageProxy.planes[0].pixelStride,
    imageProxy.width,
    imageProxy.height,
    Bitmap.Config.ARGB_8888
)
I'm trying to use the Google Mobile Vision API with the camera2 module, and I'm having a lot of trouble.
I'm using Google's android-Camera2Video example code as a base. I've modified it to include the following callback:
Camera2VideoFragment.java
OnCameraImageAvailable mCameraImageCallback;

public interface OnCameraImageAvailable {
    void onCameraImageAvailable(Image image);
}

ImageReader.OnImageAvailableListener mImageAvailable = new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader reader) {
        Image image = reader.acquireLatestImage();
        if (image == null)
            return;

        mCameraImageCallback.onCameraImageAvailable(image);
        // Note: the callback must be done with the Image before this close() runs.
        image.close();
    }
};
That way any fragment including Camera2VideoFragment.java can get access to its images.
Now, the Barcode API only accepts Bitmap images, and I'm unable to convert YUV_420_888 to a Bitmap. Instead, I changed the ImageReader's format to JPEG and ran the following conversion code:
Image.Plane[] planes = image.getPlanes();
ByteBuffer buffer = planes[0].getBuffer();
buffer.rewind();
byte[] data = new byte[buffer.capacity()];
buffer.get(data);
Bitmap bitmap = BitmapFactory.decodeByteArray(data, 0, data.length);
This worked, but the framerate drop from feeding JPEG data to the ImageReader was significant. I'm wondering if anyone has worked around this issue before.
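For reference, the YUV path I would like to keep would be set up roughly like this (a sketch; width, height, and backgroundHandler are assumed from the camera2 sample):

// Request YUV_420_888 frames instead of JPEG so the capture pipeline stays fast.
ImageReader reader = ImageReader.newInstance(width, height, ImageFormat.YUV_420_888, 2);
reader.setOnImageAvailableListener(mImageAvailable, backgroundHandler);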
A late answer, but hopefully still helpful.
As Ezequiel Adrian explained in his example, you can convert YUV_420_888 into one of the supported formats (in his case NV21) and do a similar thing to get your Bitmap output:
private byte[] convertYUV420888ToNV21(Image imgYUV420) {
    // Converting YUV_420_888 data to YUV_420_SP (NV21).
    // This relies on the device interleaving the chroma planes so that
    // plane 2's buffer already contains VU-ordered bytes; that holds on
    // many devices but is not guaranteed by the format.
    byte[] data;
    ByteBuffer buffer0 = imgYUV420.getPlanes()[0].getBuffer();
    ByteBuffer buffer2 = imgYUV420.getPlanes()[2].getBuffer();
    int buffer0_size = buffer0.remaining();
    int buffer2_size = buffer2.remaining();
    data = new byte[buffer0_size + buffer2_size];
    buffer0.get(data, 0, buffer0_size);
    buffer2.get(data, buffer0_size, buffer2_size);
    return data;
}
Then you can convert the result into a Bitmap. Note that BitmapFactory cannot decode raw NV21 bytes directly, so wrap them in a YuvImage and compress to JPEG first:

byte[] bytes = convertYUV420888ToNV21(image);
YuvImage yuvImage = new YuvImage(bytes, ImageFormat.NV21, image.getWidth(), image.getHeight(), null);
ByteArrayOutputStream out = new ByteArrayOutputStream();
yuvImage.compressToJpeg(new Rect(0, 0, image.getWidth(), image.getHeight()), 100, out);
byte[] jpegBytes = out.toByteArray();
Bitmap bitmap = BitmapFactory.decodeByteArray(jpegBytes, 0, jpegBytes.length);
A list of images and stickers (WebP format) must be shown in a RecyclerView.
To show a sticker in an ImageView, this repository (https://github.com/EverythingMe/webp-android) is used. This repository was one of the suggested solutions in this post (WebP for Android).
The sticker file is read from external storage and converted to a byte array; using the repository's library, the byte array is converted to a Bitmap, and finally the Bitmap is shown in the ImageView. The code below converts a sticker file to a Bitmap:
private void ShowStickerOnImageView(String stickerPath) throws IOException {
    File file = new File(stickerPath);
    int size = (int) file.length();
    byte[] bytes = new byte[size];
    BufferedInputStream buf = new BufferedInputStream(new FileInputStream(file));
    buf.read(bytes, 0, bytes.length);
    buf.close();

    Bitmap bitmap = null;
    boolean NATIVE_WEB_P_SUPPORT = Build.VERSION.SDK_INT >= Build.VERSION_CODES.JELLY_BEAN_MR2;
    if (!NATIVE_WEB_P_SUPPORT) {
        // Fall back to the library's decoder on devices without native WebP support.
        bitmap = WebPDecoder.getInstance().decodeWebP(bytes);
    } else {
        bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
    }
    holder.imageView.setImageBitmap(bitmap);
}
.....
public Bitmap decodeWebP(byte[] encoded, int w, int h) {
int[] width = new int[]{w};
int[] height = new int[]{h};
byte[] decoded = decodeRGBAnative(encoded, encoded.length, width, height);
if (decoded.length == 0) return null;
int[] pixels = new int[decoded.length / 4];
ByteBuffer.wrap(decoded).asIntBuffer().get(pixels);
return Bitmap.createBitmap(pixels, width[0], height[0], Bitmap.Config.ARGB_8888);
}
When NATIVE_WEB_P_SUPPORT is false, the decodeWebP method is called. This method works fine most of the time, but sometimes an out-of-memory error occurs inside it, usually on these lines:
int[] pixels = new int[decoded.length / 4];
ByteBuffer.wrap(decoded).asIntBuffer().get(pixels);
return Bitmap.createBitmap(pixels, width[0], height[0], Bitmap.Config.ARGB_8888);
I found that the byte array of the sticker file is large. Can I decrease the sticker file size programmatically? I want to find a solution that decreases the byte array size.
You are creating a Bitmap at its native size but then applying it to an ImageView. Decrease the Bitmap to the size of the view:
Bitmap yourThumbnail = Bitmap.createScaledBitmap(
        theOriginalBitmap,
        desiredWidth,
        desiredHeight,
        false
);
Do note that:

public static Bitmap createBitmap(int colors[], int width, int height, Config config) {
    return createBitmap(null, colors, 0, width, width, height, config);
}

will call:

public static Bitmap createBitmap(DisplayMetrics display, int colors[],
        int offset, int stride, int width, int height, Config config)

and that will lead to:

Bitmap bm = nativeCreate(colors, offset, stride, width, height,
        config.nativeInt, false);
Basically, you cannot create a huge Bitmap in memory for no reason. If this is for phones, assume roughly a 20 MB memory budget for the application.
An 800*600*4 image yields 1,920,000 bytes. Lower the image quality, for example by using RGB_565 (half the amount of bytes per pixel compared with ARGB_8888), or pre-scale your source bitmap.
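For the BitmapFactory path, a minimal sketch of both ideas (the inSampleSize value here is only illustrative):

BitmapFactory.Options options = new BitmapFactory.Options();
options.inSampleSize = 2;                          // decode at half width/height
options.inPreferredConfig = Bitmap.Config.RGB_565; // 2 bytes per pixel instead of 4
Bitmap bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.length, options);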
I want to take only a part of the screen data from a preview video callback, to reduce processing time. The problem is that I only know how to take the whole screen with onPreviewFrame:
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    myData = data;
    // + get camera resolution x, y
}
And then, with this data, get the image:
private Bitmap getBitmapFromYUV(byte[] data, int width, int height)
{
ByteArrayOutputStream out = new ByteArrayOutputStream();
YuvImage yuvImage = new YuvImage(data, ImageFormat.NV21, width, height, null);
yuvImage.compressToJpeg(new Rect(0, 0, width, height), 100, out);
byte[] imageBytes = out.toByteArray();
Bitmap image = BitmapFactory.decodeByteArray(imageBytes, 0, imageBytes.length);
return image;
}
And then I take the part of the image I want:
cutImage = Bitmap.createBitmap(image, xOffset, yOffset, customWidth, customHeight);
The problem is that I need to take lots of images to apply some image processing to them, and that's why I want to reduce the time it takes to get each one. Instead of taking the whole screen and then cropping it, I want to immediately get the cropped image. Is there a way to get only part of the screen data?
OK, I finally found something. I still record all the data from the camera, but when using compressToJpeg I crop the picture with a custom Rect. Maybe there is something better to do upstream of this, but it is still a good improvement. Here is my change:
yuvImage.compressToJpeg(new Rect(offsetX, offsetY, sizeCaptureX + offsetX, sizeCaptureY + offsetY), 100, out);
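In context, the whole conversion would presumably look like this (same variables as in getBitmapFromYUV above, plus the crop offsets):

// Compress only the region of interest; decoding then yields the
// already-cropped image without a separate createBitmap() crop pass.
YuvImage yuvImage = new YuvImage(data, ImageFormat.NV21, width, height, null);
ByteArrayOutputStream out = new ByteArrayOutputStream();
yuvImage.compressToJpeg(new Rect(offsetX, offsetY, sizeCaptureX + offsetX, sizeCaptureY + offsetY), 100, out);
byte[] imageBytes = out.toByteArray();
Bitmap croppedImage = BitmapFactory.decodeByteArray(imageBytes, 0, imageBytes.length);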
I have quite an annoying problem. I'm building an app where one can share photos. On the SurfaceView where you take the actual photo, the resolution is great. However, when I retrieve that image and display it in a ListView using Picasso, the resolution goes to crap. The pixelation is real. Is there anything I'm doing horrendously wrong to cause this? The first code snippet below is where I actually save the photo, and the one below that is the getItemView() method in my adapter for the ListView. Thanks in advance.
Note that the "photo" variable you see in my code is a Parse subclass I've created to make it easier to work with the data associated with each photo. I think you can safely ignore it.
EDIT:
SurfaceView of the camera:
Note that I attempt to set the camera parameters to the highest quality allowed. Unfortunately, when I log size.width and size.height, I only get around 176x144. Is there a way to get a higher resolution from the supported camera sizes themselves?
camera.setDisplayOrientation(90);
Parameters parameters = camera.getParameters();
parameters.set("jpeg-quality", 70);
parameters.setPictureFormat(ImageFormat.JPEG);
List<Camera.Size> sizes = parameters.getSupportedPictureSizes();
Size size = sizes.get(Integer.valueOf((sizes.size()-1)));
parameters.setPictureSize(size.width, size.height);
camera.setParameters(parameters);
camera.setDisplayOrientation(90);
List<Size> sizes2 = parameters.getSupportedPreviewSizes();
Size size2 = sizes.get(0);
parameters.setPreviewSize(size2.width, size2.height);
camera.setPreviewDisplay(holder);
camera.startPreview();
Saving the photo:
// Freeze camera
camera.stopPreview();
// Resize photo
Bitmap mealImage = BitmapFactory.decodeByteArray(data, 0, data.length);
Bitmap mealImageScaled = Bitmap.createScaledBitmap(mealImage, 640, 640, false);
// Override Android default landscape orientation and save portrait
Matrix matrix = new Matrix();
matrix.postRotate(90);
Bitmap rotatedScaledMealImage = Bitmap.createBitmap(mealImageScaled, 0,
0, mealImageScaled.getWidth(), mealImageScaled.getHeight(),
matrix, true);
ByteArrayOutputStream bos = new ByteArrayOutputStream();
rotatedScaledMealImage.compress(Bitmap.CompressFormat.JPEG, 100, bos);
byte[] scaledData = bos.toByteArray();
// Save the scaled image to Parse with the date and time as its file name.
DateTime currentTime = new DateTime();
DateTimeFormatter fmt = DateTimeFormat.forPattern("HH MM SS");
photoFile = new ParseFile(currentTime.toString(fmt), scaledData);
photo.setPhotoFile(photoFile);
Displaying it:
final ParseImageView photoView = holder.photoView;
ParseFile photoFile = photo.getParseFile("photo");
Picasso.with(getContext())
.load(photoFile.getUrl())
.into(photoView, new Callback() {
#Override
public void onError() {
}
#Override
public void onSuccess() {
}
});
The problem is not with Picasso.
It's because of this line of code:
parameters.set("jpeg-quality", 70);
and this:
List<Size> sizes2 = parameters.getSupportedPreviewSizes();
Size size2 = sizes.get(0);
When you set up the camera, you already turned the quality down to 70% (based on the Android documentation, the range of jpeg-quality is between 0 and 100).
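The fix for the first point is presumably just to keep the quality at 100, either through the raw key or the typed setter:

parameters.set("jpeg-quality", 100); // or: parameters.setJpegQuality(100);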
You also need to check that the chosen camera size is correct, because that code simply assumes the first entry is suitable (note that it also reads from sizes, the picture-size list, instead of sizes2).
You can try this code to get the best preview size for your preferred width and height:
private Camera.Size getBestPreviewSize(int width, int height, Camera.Parameters parameters) {
    Camera.Size bestSize = null;
    for (Camera.Size size : parameters.getSupportedPreviewSizes()) {
        // Skip sizes that would overflow the preferred dimensions.
        if (size.width > width || size.height > height) {
            continue;
        }
        // Keep the largest supported size that still fits.
        if (bestSize == null || (size.width * size.height) > (bestSize.width * bestSize.height)) {
            bestSize = size;
        }
    }
    return bestSize;
}
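For example, when configuring the camera (a sketch; surfaceView here is assumed to be your preview surface):

Camera.Size best = getBestPreviewSize(surfaceView.getWidth(), surfaceView.getHeight(), parameters);
if (best != null) {
    parameters.setPreviewSize(best.width, best.height);
}
camera.setParameters(parameters);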
I hope this answer helps. If you have another question about my answer, you can ask me in the comments :)