PreviewCallback for Camera video - java

Can someone provide sample code or an example for reading each frame of camera video using the PreviewCallback interface?
I want to get the frame without using a surface: I need each camera frame first, without displaying it on the screen, so that I can then pass it to a CCLayer in cocos2d-x.

This is a slow path and cannot process every frame of data delivered to the preview callback. It should only be used if your requirements for a live preview application make it impossible to display video from the camera (with or without a stencil-type overlay), and your application can tolerate only a few frames per second with some latency.
Some important steps:
- Your Activity must implement SurfaceHolder.Callback, and you shouldn't start the camera until the surface is created.
- Preview only supports a specific size on each device (480 x 320 on the last device I checked, some time ago). Requesting another size won't cause any problems; the request will simply be ignored. When you finally do receive the preview callback, check the actual frame size in onPreviewFrame().
- The byte[] data passed to onPreviewFrame() is in YCbCr_422_SP format. No other formats are available, even if you attempt to set them. The data is described here: http://groups.google.com/group/android-developers/msg/d3b29d3ddc8abf9b
- Don't try to decode the data in onPreviewFrame(). There isn't enough time; you will hose the camera if you hold up the system in that function for that long. Copy the data to your own buffer and decode it in a separate Thread (see the sketch after this list).
- Skip any new frames that arrive while your Thread is still processing a frame. Wait until the Thread is finished before using the data from the next available frame.
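A minimal sketch of that copy-and-hand-off pattern (class layout and field names are mine; assumes the old android.hardware.Camera API used throughout this answer):
import java.util.concurrent.atomic.AtomicBoolean;
import android.hardware.Camera;

private byte[] frameCopy;
private final AtomicBoolean busy = new AtomicBoolean(false);

private final Camera.PreviewCallback previewCallback = new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        // Skip this frame if the worker Thread is still decoding the previous one.
        if (!busy.compareAndSet(false, true)) return;
        // Copy out of the callback buffer; never decode here.
        if (frameCopy == null || frameCopy.length != data.length) {
            frameCopy = new byte[data.length];
        }
        System.arraycopy(data, 0, frameCopy, 0, data.length);
        new Thread(new Runnable() {
            @Override
            public void run() {
                try {
                    // decodeYUV(out, frameCopy, previewWidth, previewHeight); etc.
                } finally {
                    busy.set(false);
                }
            }
        }).start();
    }
};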
Decode the YUV Preview Data:
// decode Y, U, and V values on the YUV 420 buffer
// described as YCbCr_422_SP by Android - David Manpearl
public static void decodeYUV(int[] out, byte[] fg, int width, int height)
        throws NullPointerException, IllegalArgumentException {
    final int sz = width * height;
    if (out == null) throw new NullPointerException("buffer 'out' is null");
    if (out.length < sz) throw new IllegalArgumentException("buffer 'out' size " + out.length + " < minimum " + sz);
    if (fg == null) throw new NullPointerException("buffer 'fg' is null");
    // note: the chroma plane means a full frame is width * height * 3 / 2 bytes
    if (fg.length < sz * 3 / 2) throw new IllegalArgumentException("buffer 'fg' size " + fg.length + " < minimum " + sz * 3 / 2);
    int i, j;
    int Y, Cr = 0, Cb = 0;
    for (j = 0; j < height; j++) {
        int pixPtr = j * width;
        final int jDiv2 = j >> 1;
        for (i = 0; i < width; i++) {
            Y = fg[pixPtr]; if (Y < 0) Y += 255;
            if ((i & 0x1) != 1) {
                final int cOff = sz + jDiv2 * width + (i >> 1) * 2;
                Cb = fg[cOff];
                if (Cb < 0) Cb += 127; else Cb -= 128;
                Cr = fg[cOff + 1];
                if (Cr < 0) Cr += 127; else Cr -= 128;
            }
            int R = Y + Cr + (Cr >> 2) + (Cr >> 3) + (Cr >> 5);
            if (R < 0) R = 0; else if (R > 255) R = 255;
            int G = Y - (Cb >> 2) + (Cb >> 4) + (Cb >> 5) - (Cr >> 1) + (Cr >> 3) + (Cr >> 4) + (Cr >> 5);
            if (G < 0) G = 0; else if (G > 255) G = 255;
            int B = Y + Cb + (Cb >> 1) + (Cb >> 2) + (Cb >> 6);
            if (B < 0) B = 0; else if (B > 255) B = 255;
            out[pixPtr++] = 0xff000000 + (B << 16) + (G << 8) + R;
        }
    }
}
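One thing worth checking against your own output (my observation, not from the original post): the last line packs blue into bits 16-23 and red into bits 0-7, i.e. ABGR order, whereas Bitmap.Config.ARGB_8888 int pixels expect red in bits 16-23. If reds and blues appear swapped, change the packing to out[pixPtr++] = 0xff000000 | (R << 16) | (G << 8) | B;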
Convert the int[] pixel array to a Bitmap (note that BitmapFactory.decodeByteArray() is for encoded images such as JPEG and will return null for raw pixels; use Bitmap.createBitmap() instead):
Bitmap bitmap = Bitmap.createBitmap(out, width, height, Bitmap.Config.ARGB_8888);

Related

Translating C library to Java: Getting mangled garbage data in top-left of resulting bitmap

The LWJGL3 library contains bindings to STB TrueType and other libraries by Sean Barrett.
To modify the packing API provided by this library so that it renders SDF glyphs into the backing texture instead of normal bitmaps, I am reproducing the library's texture-rendering code in Java.
I have it almost working, but I've hit a stumbling block: I get mangled garbage data in the very top-left corner of the texture. I am fairly confident the error is located somewhere in my version of stbtt__h_prefilter(...), as this is where the assertion fails.
Edit: I forgot to take the current buffer position into account when doing read/write operations on the buffer. I still get some garbage data in the bitmap, but it's now more evenly distributed.
In fact, looking at the updated second picture, it seems the very left-most part of every glyph is shifted down by half the glyph height. I cannot find out where or why this happens, especially since the bitmap processing works on each glyph individually after it is rendered into the texture, so to my understanding the next line of glyphs should just overwrite this..?
Bitmap generated by the original library:
Bitmap generated by my version (see the offset half-lines cutting into some letters):
Addendum: Bitmap generated by my version without the prefilter_... methods:
Below are my versions of the methods from the library. The originals can be found here.
The references to STB... functions refer to the generated bindings from LWJGL3.
private static boolean packFontRangesRenderIntoRectsSDF(
        STBTTPackContext context, STBTTFontinfo fontinfo,
        STBTTPackRange.Buffer ranges, STBRPRect.Buffer rects) {
    int i, j, k;
    boolean returnValue = true;
    int curr_hOversample = context.h_oversample();
    int curr_vOversample = context.v_oversample();
    k = 0;
    for (i = 0; i < ranges.remaining(); i++) {
        float fh = ranges.get(i).font_size();
        float scale = fh > 0.0f ? stbtt_ScaleForPixelHeight(fontinfo, fh)
                : stbtt_ScaleForMappingEmToPixels(fontinfo, -fh);
        float recip_h, recip_v, sub_x, sub_y;
        curr_hOversample = STBTTPackRange.nh_oversample(ranges.get(i).address()) & 0xFF;
        curr_vOversample = STBTTPackRange.nv_oversample(ranges.get(i).address()) & 0xFF;
        recip_h = 1.0f / (float) curr_hOversample;
        recip_v = 1.0f / (float) curr_vOversample;
        sub_x = __oversample_shift(curr_hOversample);
        sub_y = __oversample_shift(curr_vOversample);
        for (j = 0; j < ranges.get(i).num_chars(); j++) {
            STBRPRect r = rects.get(k);
            if (r.was_packed()) {
                STBTTPackedchar bc = ranges.get(i).chardata_for_range().get(j);
                IntBuffer advance = ByteBuffer.allocateDirect(Integer.BYTES)
                        .order(ByteOrder.nativeOrder()).asIntBuffer();
                IntBuffer lsb = ByteBuffer.allocateDirect(Integer.BYTES)
                        .order(ByteOrder.nativeOrder()).asIntBuffer();
                IntBuffer x0 = ByteBuffer.allocateDirect(Integer.BYTES)
                        .order(ByteOrder.nativeOrder()).asIntBuffer();
                IntBuffer x1 = ByteBuffer.allocateDirect(Integer.BYTES)
                        .order(ByteOrder.nativeOrder()).asIntBuffer();
                IntBuffer y0 = ByteBuffer.allocateDirect(Integer.BYTES)
                        .order(ByteOrder.nativeOrder()).asIntBuffer();
                IntBuffer y1 = ByteBuffer.allocateDirect(Integer.BYTES)
                        .order(ByteOrder.nativeOrder()).asIntBuffer();
                int codepoint = ranges.get(i).array_of_unicode_codepoints() == null
                        ? ranges.get(i).first_unicode_codepoint_in_range() + j
                        : ranges.get(i).array_of_unicode_codepoints().get(j);
                int glyph = stbtt_FindGlyphIndex(fontinfo, codepoint);
                int pad = context.padding();
                r.x((short) (r.x() + pad));
                r.y((short) (r.y() + pad));
                r.w((short) (r.w() - pad));
                r.h((short) (r.h() - pad));
                stbtt_GetGlyphHMetrics(fontinfo, glyph, advance, lsb);
                stbtt_GetGlyphBitmapBox(fontinfo, glyph,
                        scale * curr_hOversample,
                        scale * curr_vOversample,
                        x0, y0, x1, y1);
                // TODO replace below with SDF func
                ByteBuffer buff = context.pixels(context.height() * context.width());
                buff.position(r.x() + r.y() * context.stride_in_bytes());
                stbtt_MakeGlyphBitmapSubpixel(fontinfo, buff,
                        r.w() - curr_hOversample + 1,
                        r.h() - curr_vOversample + 1,
                        context.stride_in_bytes(),
                        scale * curr_hOversample,
                        scale * curr_vOversample,
                        0, 0,
                        glyph);
                if (curr_hOversample > 1) {
                    // FIXME __h_prefilter(..) function
                    buff.position(r.x() + r.y() * context.stride_in_bytes());
                    __h_prefilter(buff,
                            r.w(), r.h(), context.stride_in_bytes(),
                            curr_hOversample);
                }
                if (curr_vOversample > 1) {
                    // FIXME __v_prefilter(..) function
                    buff.position(r.x() + r.y() * context.stride_in_bytes());
                    __v_prefilter(buff,
                            r.w(), r.h(), context.stride_in_bytes(),
                            curr_vOversample);
                }
                bc.x0(r.x());
                bc.y0(r.y());
                bc.x1((short) (r.x() + r.w()));
                bc.y1((short) (r.y() + r.h()));
                bc.xadvance(scale * advance.get(0));
                bc.xoff((float) (x0.get(0) * recip_h + sub_x));
                bc.yoff((float) (y0.get(0) * recip_v + sub_y));
                bc.xoff2((x0.get(0) + r.w()) * recip_h + sub_x);
                bc.yoff2((y0.get(0) + r.h()) * recip_v + sub_y);
            } else {
                returnValue = false;
            }
            ++k;
        }
    }
    return returnValue;
}
// copy of stbtt__oversample_shift(..) as it's inaccessible
private static float __oversample_shift(int oversample) {
    if (oversample == 0) {
        return 0.0f;
    }
    return (float) -(oversample - 1) / (2.0f * (float) oversample);
}
private static final int MAX_OVERSAMPLE = 8;
private static final int __OVER_MASK = MAX_OVERSAMPLE - 1;
private static void __h_prefilter(ByteBuffer pixels, int w, int h, int stride_in_bytes, int kernel_width) {
    final int pixels_offset = pixels.position();
    int pixelstride = 0;
    byte[] buffer = new byte[MAX_OVERSAMPLE];
    int safe_w = w - kernel_width;
    int j;
    Arrays.fill(buffer, 0, MAX_OVERSAMPLE, (byte) 0);
    for (j = 0; j < h; j++) {
        int i;
        int total;
        Arrays.fill(buffer, 0, kernel_width, (byte) 0);
        total = 0;
        for (i = 0; i <= safe_w; i++) {
            total += Byte.toUnsignedInt(pixels.get(pixels_offset + (pixelstride + i)))
                    - Byte.toUnsignedInt(buffer[i & __OVER_MASK]);
            buffer[(i + kernel_width) & __OVER_MASK] = pixels.get(pixels_offset + (pixelstride + i));
            pixels.put(pixels_offset + (pixelstride + i), (byte) Integer.divideUnsigned(total, kernel_width));
        }
        for (; i < w; ++i) {
            // if (Byte.toUnsignedInt(pixels.get(pixels_offset + (pixelstride + i))) != 0) {
            //     throw new RuntimeException("Expected '0' but was '" + Byte.toUnsignedInt(pixels.get(pixels_offset + (pixelstride + i))) + "'");
            // }
            total -= Byte.toUnsignedInt(buffer[i & __OVER_MASK]);
            pixels.put(pixels_offset + (pixelstride + i), (byte) Integer.divideUnsigned(total, kernel_width));
        }
        pixelstride += stride_in_bytes;
    }
}
private static void __v_prefilter(ByteBuffer pixels, int w, int h, int stride_in_bytes, int kernel_width) {
    final int pixels_offset = pixels.position();
    int pixelstride = 0;
    byte[] buffer = new byte[MAX_OVERSAMPLE];
    int safe_h = h - kernel_width;
    int j;
    Arrays.fill(buffer, 0, MAX_OVERSAMPLE, (byte) 0);
    for (j = 0; j < w; j++) {
        int i;
        int total;
        Arrays.fill(buffer, 0, kernel_width, (byte) 0);
        total = 0;
        for (i = 0; i <= safe_h; i++) {
            total += Byte.toUnsignedInt(pixels.get(pixels_offset + ((pixelstride + i) * stride_in_bytes)))
                    - Byte.toUnsignedInt(buffer[i & __OVER_MASK]);
            buffer[(i + kernel_width) & __OVER_MASK] = pixels.get(pixels_offset + ((pixelstride + i) * stride_in_bytes));
            pixels.put(pixels_offset + ((pixelstride + i) * stride_in_bytes), (byte) Integer.divideUnsigned(total, kernel_width));
        }
        for (; i < h; ++i) {
            // if (Byte.toUnsignedInt(pixels.get(pixels_offset + ((pixelstride + i) * stride_in_bytes))) != 0) {
            //     throw new RuntimeException("Expected '0' but was '" + Byte.toUnsignedInt(pixels.get(pixels_offset + ((pixelstride + i) * stride_in_bytes))) + "'");
            // }
            total -= Byte.toUnsignedInt(buffer[i & __OVER_MASK]);
            pixels.put(pixels_offset + ((pixelstride + i) * stride_in_bytes), (byte) Integer.divideUnsigned(total, kernel_width));
        }
        pixelstride += 1;
    }
}
It seems to work fine once I remove the offset from the __v_prefilter(..) method, i.e. changing final int pixels_offset = pixels.position(); to final int pixels_offset = 0; (or removing it from the code altogether).
I say it seems to work because I have not done any bitwise comparison of the bitmaps produced by my now-working code and the original; there are simply no mangled bits in the texture that I can discern anymore.
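For anyone hitting the same symptom, my reading of the code above (an assumption on my part, not something confirmed in the post) is that zeroing pixels_offset masks an indexing mix-up rather than fixing it: __h_prefilter addresses a pixel as pixels_offset + pixelstride + i (column i plus a row base that grows by stride_in_bytes), while __v_prefilter computes pixels_offset + ((pixelstride + i) * stride_in_bytes), which multiplies the column term pixelstride by the stride as well. The distinction in two lines of Java:
// addressing (col, row) in a byte image with a row stride, relative to an absolute base:
int good = base + col + row * strideInBytes;  // what stbtt's C pointer arithmetic does
int bad  = base + (col + row) * strideInBytes; // scales the column by the stride too
So an alternative fix would be to index with pixels_offset + pixelstride + i * stride_in_bytes in __v_prefilter.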

Encoding yuv frames to video file in java

I am trying to encode a video in java.
I have access to the separate frames as I420 yuv frames (these come from a different part of the program that I cannot change).
I basically have 3 bytebuffers for the different planes of a frame (+ dimensions).
As far as I understand, my format has 1 byte for the y-plane, and half a byte for u and v each, per pixel.
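For a width x height frame, that sampling works out to the following plane sizes (simple arithmetic from the dimensions I already have):
int ySize = width * height;        // 1 byte of Y per pixel
int uSize = width * height / 4;    // 1 byte of U per 2x2 pixel block
int vSize = width * height / 4;    // 1 byte of V per 2x2 pixel block
int total = ySize + uSize + vSize; // = width * height * 3 / 2 bytes per frame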
What is the best way to encode these into an mp4 video file?
I have tried the Xuggler API, but I can't seem to find a way to use the YUV frames directly.
Right now I would convert them to a BufferedImage (TYPE_3BYTE_BGR) first before I can use them with the Xuggler API to encode them to a video.
But this creates a huge overhead (I have to convert the YUV data to RGB for each pixel) and is presumably unnecessary, as Xuggler converts them back to YUV frames to store them in a video file? (Not sure about this.)
So is there any easier way to encode raw yuv-frames to a video file directly in java?
Thanks for any pointers.
The way you are planning this seems correct (manual conversion), as long as Xuggler doesn't re-encode the frame after you.
I have done this conversion in both Python and C; the process is the same as yours (frame by frame, looping over the pixels). In Java it could look like this:
public class YUV2RGB
{
    public static void convert(int[] argb, byte[] yuv, int width, int height)
    {
        final int frameSize = width * height;
        final int ii = 0;
        final int ij = 0;
        final int di = +1;
        final int dj = +1;
        int a = 0;
        int y, v, u, r, g, b;
        for (int i = 0, ci = ii; i < height; ++i, ci += di)
        {
            for (int j = 0, cj = ij; j < width; ++j, cj += dj)
            {
                y = (0xff & ((int) yuv[ci * width + cj]));
                v = (0xff & ((int) yuv[frameSize + (ci >> 1) * width + (cj & ~1) + 0]));
                u = (0xff & ((int) yuv[frameSize + (ci >> 1) * width + (cj & ~1) + 1]));
                y = y < 16 ? 16 : y;
                // METHOD 1 [slower, less accurate]
                /*
                 * r = y + (int) 1.402f * v;
                 * g = y - (int) (0.344f * u + 0.714f * v);
                 * b = y + (int) 1.772f * u;
                 * r = r > 255 ? 255 : r < 0 ? 0 : r;
                 * g = g > 255 ? 255 : g < 0 ? 0 : g;
                 * b = b > 255 ? 255 : b < 0 ? 0 : b;
                 * argb[a++] = 0xff000000 | (b << 16) | (g << 8) | r;
                 */
                // METHOD 2
                r = (int) (1.164f * (y - 16) + 1.596f * (v - 128));
                g = (int) (1.164f * (y - 16) - 0.813f * (v - 128) - 0.391f * (u - 128));
                b = (int) (1.164f * (y - 16) + 2.018f * (u - 128));
                r = r < 0 ? 0 : (r > 255 ? 255 : r);
                g = g < 0 ? 0 : (g > 255 ? 255 : g);
                b = b < 0 ? 0 : (b > 255 ? 255 : b);
                argb[a++] = 0xff000000 | (r << 16) | (g << 8) | b;
            }
        }
    }
}
Sample code from https://github.com/jyanik/Mocobar/blob/master/Mocobar/src/com/yanik/mocobar/camera/YUV2RGB.java
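If you stick with Xuggler, one way to bridge to it (a sketch; width and height are your frame dimensions, and I'm only certain of the java.awt part) is to wrap the converted pixels in a BufferedImage:
import java.awt.image.BufferedImage;

int[] argb = new int[width * height];
YUV2RGB.convert(argb, yuv, width, height);
BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
// copy the ARGB ints into the image; TYPE_INT_RGB ignores the alpha byte
image.setRGB(0, 0, width, height, argb, 0, width);
From there, if I remember the Xuggler API correctly, the image can be handed to IMediaWriter.encodeVideo(streamIndex, image, timeStamp, timeUnit).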
I can't remember the source of this code, probably a question here on SO, but it comes in handy here as it's a version of the above using integer maths. I needed it for an Android project!
// Method from Ketai project! Not mine! See below...
void decodeYUV420SP(int[] rgb, byte[] yuv420sp, int width, int height) {
    final int frameSize = width * height;
    for (int j = 0, yp = 0; j < height; j++) {
        int uvp = frameSize + (j >> 1) * width, u = 0, v = 0;
        for (int i = 0; i < width; i++, yp++) {
            int y = (0xff & ((int) yuv420sp[yp])) - 16;
            if (y < 0)
                y = 0;
            if ((i & 1) == 0) {
                v = (0xff & yuv420sp[uvp++]) - 128;
                u = (0xff & yuv420sp[uvp++]) - 128;
            }
            int y1192 = 1192 * y;
            int r = (y1192 + 1634 * v);
            int g = (y1192 - 833 * v - 400 * u);
            int b = (y1192 + 2066 * u);
            if (r < 0) r = 0; else if (r > 262143) r = 262143;
            if (g < 0) g = 0; else if (g > 262143) g = 262143;
            if (b < 0) b = 0; else if (b > 262143) b = 262143;
            rgb[yp] = 0xff000000 | ((r << 6) & 0xff0000) | ((g >> 2) & 0xff00) | ((b >> 10) & 0xff);
        }
    }
}
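A note on the constants, since the integer maths may look opaque (my reading of the code, not from the original source): this is the same BT.601 conversion as METHOD 2 above, in 10-bit fixed point. Each coefficient is scaled by 1024 (1.164 * 1024 ≈ 1192, 1.596 * 1024 ≈ 1634, 0.813 * 1024 ≈ 833, 0.391 * 1024 ≈ 400, 2.018 * 1024 ≈ 2066), the intermediate results are clamped to 2^18 - 1 = 262143 (255 * 1024 = 261120 just fits in 18 bits), and the final shifts are each value >> 10 moved into its byte of the ARGB int: (r << 6) & 0xff0000 equals ((r >> 10) & 0xff) << 16, and likewise for g and b.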

decodeyuv420sp single pixel version

I'm trying to convert several pixels from a YUV (NV21) image to RGB format (yes, just some pixels, not the whole image, because of runtime constraints).
Currently I'm using this decodeYUV420SP function from the internet:
static public void decodeYUV420SP(int[] rgb, byte[] yuv420sp, int width, int height) {
    final int frameSize = width * height;
    for (int j = 0, yp = 0; j < height; j++) {
        int uvp = frameSize + (j >> 1) * width, u = 0, v = 0;
        for (int i = 0; i < width; i++, yp++) {
            int y = (0xff & ((int) yuv420sp[yp])) - 16;
            if (y < 0) y = 0;
            if ((i & 1) == 0) {
                v = (0xff & yuv420sp[uvp++]) - 128;
                u = (0xff & yuv420sp[uvp++]) - 128;
            }
            int y1192 = 1192 * y;
            int r = (y1192 + 1634 * v);
            int g = (y1192 - 833 * v - 400 * u);
            int b = (y1192 + 2066 * u);
            if (r < 0) r = 0; else if (r > 262143) r = 262143;
            if (g < 0) g = 0; else if (g > 262143) g = 262143;
            if (b < 0) b = 0; else if (b > 262143) b = 262143;
            rgb[yp] = 0xff000000 | ((r << 6) & 0xff0000) | ((g >> 2) & 0xff00) | ((b >> 10) & 0xff);
        }
    }
}
Now I want to modify it so that it returns the R, G, B values of a specific pixel (X, Y),
but I can't understand exactly what it does; it looks like one YUV chroma sample corresponds to more than one RGB pixel. Can someone help me with this issue?
Thank you!
This is just quick and dirty, but I think it will work. All I'm doing is removing the loops and replacing them with the given X,Y (represented herein as "column" and "row", because we're using "y" to mean something else). One fix to the index arithmetic: uvp must also be advanced to the pixel's column; in NV21 each interleaved V/U pair is shared by two columns, so the pair for a given column starts at (column & ~1).
int yp = row * width + column;
int uvp = frameSize + (row >> 1) * width + (column & ~1); // V/U pair shared by this column and its neighbour
int y = (0xff & ((int) yuv420sp[yp])) - 16;
if (y < 0) y = 0;
int v = (0xff & yuv420sp[uvp]) - 128;
int u = (0xff & yuv420sp[uvp + 1]) - 128;
int y1192 = 1192 * y;
int r = (y1192 + 1634 * v);
int g = (y1192 - 833 * v - 400 * u);
int b = (y1192 + 2066 * u);
if (r < 0) r = 0; else if (r > 262143) r = 262143;
if (g < 0) g = 0; else if (g > 262143) g = 262143;
if (b < 0) b = 0; else if (b > 262143) b = 262143;
result = 0xff000000 | ((r << 6) & 0xff0000) | ((g >> 2) & 0xff00) | ((b >> 10) & 0xff);
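Wrapped up as a self-contained helper (the method name is mine), so it can be called for individual pixels:
public static int yuv420spPixelToArgb(byte[] yuv420sp, int width, int height, int row, int column) {
    final int frameSize = width * height;
    int y = (0xff & yuv420sp[row * width + column]) - 16;
    if (y < 0) y = 0;
    // interleaved V/U pair shared by this column and its even/odd neighbour
    int uvp = frameSize + (row >> 1) * width + (column & ~1);
    int v = (0xff & yuv420sp[uvp]) - 128;
    int u = (0xff & yuv420sp[uvp + 1]) - 128;
    int y1192 = 1192 * y;
    int r = Math.max(0, Math.min(y1192 + 1634 * v, 262143));
    int g = Math.max(0, Math.min(y1192 - 833 * v - 400 * u, 262143));
    int b = Math.max(0, Math.min(y1192 + 2066 * u, 262143));
    return 0xff000000 | ((r << 6) & 0xff0000) | ((g >> 2) & 0xff00) | ((b >> 10) & 0xff);
}
The red, green and blue bytes are then (result >> 16) & 0xff, (result >> 8) & 0xff and result & 0xff.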
Hope it helps.

Convert raw data to jpeg

I have found a way to convert raw data to JPEG, but I have some issues with it.
My app takes a picture of the current frame (onPreviewFrame) and has the raw data in a byte array.
First of all, the code I found is only supported on API 7+ (Android 2.1+). I want the app to be usable from API 4+ so Android 1.6 users can also enjoy it.
The second thing is that the raw2jpg code I did find is copyright protected, so I can't use it.
I want the result in a byte array, so remember that I won't be using takePicture() for this.
Does anyone have an idea or a code snippet I can use to convert the raw data of the current frame into a JPEG image in a byte array, working from Android 1.6?
EDIT: Here is the code:
private void raw2jpg(int[] rgb, byte[] raw, int width, int height)
{
    final int frameSize = width * height;
    for (int j = 0, yp = 0; j < height; j++)
    {
        int uvp = frameSize + (j >> 1) * width, u = 0, v = 0;
        for (int i = 0; i < width; i++, yp++)
        {
            int y = 0;
            if (yp < raw.length)
            {
                y = (0xff & ((int) raw[yp])) - 16;
            }
            // int y = (0xff & ((int) raw[yp])) - 16;
            if (y < 0) y = 0;
            if ((i & 1) == 0)
            {
                if (uvp + 1 < raw.length) // need both chroma bytes
                {
                    v = (0xff & raw[uvp++]) - 128;
                    u = (0xff & raw[uvp++]) - 128;
                }
            }
            int y1192 = 1192 * y;
            int r = (y1192 + 1634 * v);
            int g = (y1192 - 833 * v - 400 * u);
            int b = (y1192 + 2066 * u);
            if (r < 0) r = 0; else if (r > 262143) r = 262143;
            if (g < 0) g = 0; else if (g > 262143) g = 262143;
            if (b < 0) b = 0; else if (b > 262143) b = 262143;
            rgb[yp] = 0xff000000 | ((r << 6) & 0xff0000) | ((g >> 2) & 0xff00) | ((b >> 10) & 0xff);
        }
    }
}
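Note that raw2jpg() only fills the ARGB pixel array; to actually get JPEG bytes you can compress the result through a Bitmap (these calls exist since API 1, so Android 1.6 is fine):
import java.io.ByteArrayOutputStream;
import android.graphics.Bitmap;

int[] rgb = new int[width * height];
raw2jpg(rgb, raw, width, height);
Bitmap bmp = Bitmap.createBitmap(rgb, width, height, Bitmap.Config.ARGB_8888);
ByteArrayOutputStream jpegStream = new ByteArrayOutputStream();
bmp.compress(Bitmap.CompressFormat.JPEG, 90, jpegStream); // quality 0-100
byte[] jpegBytes = jpegStream.toByteArray();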

Getting frames from Video Image in Android

I've implemented a simple application which shows the camera picture on the screen. What I'd like to do now is grab a single frame and process it as a bitmap.
From what I could find out so far, this is not an easy thing to do.
I've tried using the onPreviewFrame method, with which you get the current frame as a byte array, and tried to decode it with the BitmapFactory class, but it returns null.
The format of the frame is headerless YUV, which can be translated to a bitmap, but that takes too long on a phone. I've also read that the onPreviewFrame method has constraints on its runtime; if it takes too long, the application could crash.
So what is the right way to do this?
OK, what we ended up doing is using the onPreviewFrame method and decoding the data in a separate Thread, using a method which can be found in the Android help group.
decodeYUV(argb8888, data, camSize.width, camSize.height);
Bitmap bitmap = Bitmap.createBitmap(argb8888, camSize.width,
        camSize.height, Config.ARGB_8888);
...
// decode Y, U, and V values on the YUV 420 buffer described as YCbCr_422_SP by Android
// David Manpearl 081201
public void decodeYUV(int[] out, byte[] fg, int width, int height)
        throws NullPointerException, IllegalArgumentException {
    int sz = width * height;
    if (out == null)
        throw new NullPointerException("buffer out is null");
    if (out.length < sz)
        throw new IllegalArgumentException("buffer out size " + out.length
                + " < minimum " + sz);
    if (fg == null)
        throw new NullPointerException("buffer 'fg' is null");
    // note: a full frame including the chroma plane is width * height * 3 / 2 bytes
    if (fg.length < sz * 3 / 2)
        throw new IllegalArgumentException("buffer fg size " + fg.length
                + " < minimum " + sz * 3 / 2);
    int i, j;
    int Y, Cr = 0, Cb = 0;
    for (j = 0; j < height; j++) {
        int pixPtr = j * width;
        final int jDiv2 = j >> 1;
        for (i = 0; i < width; i++) {
            Y = fg[pixPtr];
            if (Y < 0)
                Y += 255;
            if ((i & 0x1) != 1) {
                final int cOff = sz + jDiv2 * width + (i >> 1) * 2;
                Cb = fg[cOff];
                if (Cb < 0)
                    Cb += 127;
                else
                    Cb -= 128;
                Cr = fg[cOff + 1];
                if (Cr < 0)
                    Cr += 127;
                else
                    Cr -= 128;
            }
            int R = Y + Cr + (Cr >> 2) + (Cr >> 3) + (Cr >> 5);
            if (R < 0)
                R = 0;
            else if (R > 255)
                R = 255;
            int G = Y - (Cb >> 2) + (Cb >> 4) + (Cb >> 5) - (Cr >> 1)
                    + (Cr >> 3) + (Cr >> 4) + (Cr >> 5);
            if (G < 0)
                G = 0;
            else if (G > 255)
                G = 255;
            int B = Y + Cb + (Cb >> 1) + (Cb >> 2) + (Cb >> 6);
            if (B < 0)
                B = 0;
            else if (B > 255)
                B = 255;
            out[pixPtr++] = 0xff000000 + (B << 16) + (G << 8) + R;
        }
    }
}
Link: http://groups.google.com/group/android-developers/browse_thread/thread/c85e829ab209ceea/3f180a16a4872b58?lnk=gst&q=onpreviewframe#3f180a16a4872b58
In API 17+, you can convert from NV21 to RGBA_8888 with the ScriptIntrinsicYuvToRGB RenderScript intrinsic. This allows you to easily process preview frames without manually encoding/decoding them:
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    // 'r' here is the preview-size rectangle
    Bitmap bitmap = Bitmap.createBitmap(r.width(), r.height(), Bitmap.Config.ARGB_8888);
    Allocation bmData = renderScriptNV21ToRGBA888(
            mContext,
            r.width(),
            r.height(),
            data);
    bmData.copyTo(bitmap);
}
public Allocation renderScriptNV21ToRGBA888(Context context, int width, int height, byte[] nv21) {
    RenderScript rs = RenderScript.create(context);
    ScriptIntrinsicYuvToRGB yuvToRgbIntrinsic = ScriptIntrinsicYuvToRGB.create(rs, Element.U8_4(rs));
    Type.Builder yuvType = new Type.Builder(rs, Element.U8(rs)).setX(nv21.length);
    Allocation in = Allocation.createTyped(rs, yuvType.create(), Allocation.USAGE_SCRIPT);
    Type.Builder rgbaType = new Type.Builder(rs, Element.RGBA_8888(rs)).setX(width).setY(height);
    Allocation out = Allocation.createTyped(rs, rgbaType.create(), Allocation.USAGE_SCRIPT);
    in.copyFrom(nv21);
    yuvToRgbIntrinsic.setInput(in);
    yuvToRgbIntrinsic.forEach(out);
    return out;
}
I actually tried the code given in the previous answer and found that the color values are not exact. I checked by taking both the preview and camera.takePicture(), which directly returns a JPEG array, and the colors were very different. After a little more searching I found another example to convert the preview image from YCrCb to RGB:
static public void decodeYUV420SP(int[] rgb, byte[] yuv420sp, int width, int height) {
    final int frameSize = width * height;
    for (int j = 0, yp = 0; j < height; j++) {
        int uvp = frameSize + (j >> 1) * width, u = 0, v = 0;
        for (int i = 0; i < width; i++, yp++) {
            int y = (0xff & ((int) yuv420sp[yp])) - 16;
            if (y < 0) y = 0;
            if ((i & 1) == 0) {
                v = (0xff & yuv420sp[uvp++]) - 128;
                u = (0xff & yuv420sp[uvp++]) - 128;
            }
            int y1192 = 1192 * y;
            int r = (y1192 + 1634 * v);
            int g = (y1192 - 833 * v - 400 * u);
            int b = (y1192 + 2066 * u);
            if (r < 0) r = 0; else if (r > 262143) r = 262143;
            if (g < 0) g = 0; else if (g > 262143) g = 262143;
            if (b < 0) b = 0; else if (b > 262143) b = 262143;
            rgb[yp] = 0xff000000 | ((r << 6) & 0xff0000) | ((g >> 2) & 0xff00) | ((b >> 10) & 0xff);
        }
    }
}
The color values given by this and by takePicture() match exactly, so I thought I should post it here.
This is where I got this code from.
Hope this helps.
Tim's RenderScript solution is great. Two comments here, though (see the sketch below):
1. Create RenderScript rs and the Allocations in and out once, and reuse them; creating them on every frame will hurt performance.
2. The RenderScript support library lets you support devices back to Android 2.3.
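A minimal sketch of that reuse pattern (field and method names are mine; assumes a fixed preview size):
private RenderScript rs;
private ScriptIntrinsicYuvToRGB yuvToRgb;
private Allocation in, out;

// Call once, e.g. when the preview size is known.
private void initRenderScript(Context context, int width, int height, int nv21Length) {
    rs = RenderScript.create(context);
    yuvToRgb = ScriptIntrinsicYuvToRGB.create(rs, Element.U8_4(rs));
    in = Allocation.createTyped(rs,
            new Type.Builder(rs, Element.U8(rs)).setX(nv21Length).create(),
            Allocation.USAGE_SCRIPT);
    out = Allocation.createTyped(rs,
            new Type.Builder(rs, Element.RGBA_8888(rs)).setX(width).setY(height).create(),
            Allocation.USAGE_SCRIPT);
    yuvToRgb.setInput(in);
}

// Per frame: only a copy and a kernel launch, no allocations.
private void convertFrame(byte[] nv21, Bitmap reusedBitmap) {
    in.copyFrom(nv21);
    yuvToRgb.forEach(out);
    out.copyTo(reusedBitmap);
}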
I don't see any of these answers doing better in performance than the built-in way to convert it. You can get the bitmap like this:
Camera.Parameters params = camera.getParameters();
Camera.Size previewsize = params.getPreviewSize();
YuvImage yuv = new YuvImage(data, ImageFormat.NV21, previewsize.width, previewsize.height, null);
ByteArrayOutputStream stream = new ByteArrayOutputStream();
yuv.compressToJpeg(new Rect(0,0,previewsize.width, previewsize.height), 100, stream);
byte[] buf = stream.toByteArray();
Bitmap bitmap = BitmapFactory.decodeByteArray(buf, 0, buf.length);
