I am trying to stream JPEG frames from the camera to my PC using a UDP socket, but I am running into some issues.
So I set up a camera and added a callback for the preview frame event:
@Override
public void onPreviewFrame(byte[] data, Camera camera)
{
    final YuvImage image = new YuvImage(data, mPreviewFormat, mPreviewWidth, mPreviewHeight, null); // Create the YUV image
    image.compressToJpeg(mPreviewRect, 80, stream); // Compress to JPEG
    Bitmap b = BitmapFactory.decodeByteArray(stream.toByteArray(), 0, stream.size()); // Convert to Bitmap
    Bitmap resizedBitmap = Bitmap.createScaledBitmap(b, 320, 240, false); // Scale to 320x240
    stream.reset(); // Clear the full-size JPEG so only the resized frame is sent
    resizedBitmap.compress(Bitmap.CompressFormat.JPEG, 100, stream); // Compress back to JPEG
    byte[] byteArray = stream.toByteArray();
    DatagramPacket sendPacket = new DatagramPacket(byteArray, byteArray.length, IPAddress, 37654);
    try
    {
        socket.send(sendPacket); // Send frame to address
    }
    catch (IOException e)
    {
        e.printStackTrace();
    }
    stream.reset();
}
My problem is that this takes about 0.2 seconds per frame, so my frame rate is about 5 FPS. Is there any way I can speed this up? My target is anywhere from 15 to 20 FPS. From my timing tests I believe the bottleneck is Bitmap b = BitmapFactory.decodeByteArray(stream.toByteArray(), 0, stream.size()); it seems to take the longest, about 0.1 seconds. Is there a way to scale a YUV image directly?
Thanks!
I've done something similar and ended up moving that work into native code, making a JNI call to convert from NV21 and push the result onto a circular buffer.
Have a second thread read off the buffer and do the network I/O so you can return from onPreviewFrame as quickly as possible.
Even better, copy the NV21 data to the circular buffer and have that second thread do the JPEG conversion before sending the data over the network.
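A minimal sketch of that split, assuming NV21 preview data, a fixed preview size, and an already-open DatagramSocket; the FrameSender class, its field names, and the queue size are my own inventions, with a BlockingQueue standing in for the circular buffer:

import android.graphics.ImageFormat;
import android.graphics.Rect;
import android.graphics.YuvImage;
import java.io.ByteArrayOutputStream;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hypothetical worker thread: onPreviewFrame only copies the NV21 bytes into
// the queue, and this thread does the JPEG conversion and the network I/O.
class FrameSender extends Thread {
    private final BlockingQueue<byte[]> frames = new ArrayBlockingQueue<>(4);
    private final DatagramSocket socket;
    private final InetAddress address;
    private final int port, width, height;

    FrameSender(DatagramSocket socket, InetAddress address, int port, int width, int height) {
        this.socket = socket;
        this.address = address;
        this.port = port;
        this.width = width;
        this.height = height;
    }

    // Call this from onPreviewFrame; the frame is dropped if the queue is
    // full, so the camera callback never blocks.
    void submit(byte[] nv21) {
        frames.offer(nv21.clone()); // copy, because the camera reuses its buffer
    }

    @Override
    public void run() {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try {
            while (!isInterrupted()) {
                byte[] nv21 = frames.take();
                out.reset();
                YuvImage image = new YuvImage(nv21, ImageFormat.NV21, width, height, null);
                image.compressToJpeg(new Rect(0, 0, width, height), 80, out);
                byte[] jpeg = out.toByteArray();
                socket.send(new DatagramPacket(jpeg, jpeg.length, address, port));
            }
        } catch (InterruptedException | IOException e) {
            e.printStackTrace();
        }
    }
}

With something like this in place, onPreviewFrame shrinks to a single sender.submit(data) call.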
By the way, I did this for a video chat client. It worked well, but I've since moved to sending H.263 frames, which is more efficient than sending JPEGs if you are also building a video chat client.
I'm developing a sort of remote desktop application, where the Android client is able to show the screen of the desktop (and control it as well).
The problem is streaming the video. My solution was to capture screenshots using the Java Robot class, resize them according to the target device resolution, and send them over DatagramPackets to the Android client. Then I found out that the raw images were too big to be sent in a single UDP packet (they were over 62KB), and so I would get one of these:
IOException: Message too long
So I compressed the images to JPEG with a quality setting of 0.25 until the size of the frames was reasonably below 62KB. But even so, the image size sometimes gets too large and the frame is not sent. The code for this at present is:
@Override
public void run() {
    while (running) {
        try {
            BufferedImage screenshot = robot.createScreenCapture(
                    new Rectangle(Toolkit.getDefaultToolkit().getScreenSize()));
            BufferedImage resizedSS = Scalr.resize(screenshot,
                    Scalr.Method.BALANCED, screenSizeX, screenSizeY);
            byte[] imageBuff = compressToJPEG(resizedSS, 0.25f);
            DatagramPacket packet = new DatagramPacket(imageBuff, imageBuff.length, address, port);
            socket.send(packet);
        } catch (IOException e) {
            LOGGER.log(Level.SEVERE, "LiveScreenSender.run: error while sending screenshot");
            e.printStackTrace();
            break;
        }
    }
}
The code for JPEG compression (note: this is a modified version of code I found somewhere on SO, and I don't fully understand how it works):
private static byte[] compressToJPEG(BufferedImage image, float quality) throws IOException {
    // Grab the platform's JPEG writer and point it at an in-memory stream.
    ImageWriter writer = ImageIO.getImageWritersByFormatName("jpg").next();
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    ImageOutputStream ios = ImageIO.createImageOutputStream(out);
    writer.setOutput(ios);
    // Ask the writer to compress to the given quality (0.0 - 1.0).
    ImageWriteParam param = writer.getDefaultWriteParam();
    param.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
    param.setCompressionQuality(quality);
    writer.write(null, new IIOImage(image, null, null), param);
    ios.close();
    writer.dispose();
    return out.toByteArray();
}
The client (Android) part is also fairly similar:
@Override
public void run() {
    while (running) {
        try {
            socket.receive(dataBuffPacket);
            if (dataBuffPacket.getLength() == 0) break;
            Bitmap bitmap = BitmapFactory.decodeByteArray(dataBuffPacket.getData(), 0, dataBuffPacket.getLength());
            onFrameReceive(bitmap);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
Now, if UDP packet lengths are limited, what is the conventional way of doing live video streaming over UDP? I need some suggestions on which direction to go to solve this. Any library recommendations would be most appreciated. As far as I've seen on the internet, most questions and blog posts pertain to streaming content stored on a server, not video generated in real time frame by frame.
My knowledge regarding networking and A/V playback is fairly limited, so I would appreciate good tutorials or links to where I might be able to learn these.
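For reference, the usual workaround when one frame does not fit into a single datagram is to split it into numbered chunks and reassemble them on the client. The rough sketch below shows the sending side only (using java.nio.ByteBuffer); the 8-byte header layout (frameId, chunkIndex, chunkCount) and CHUNK_SIZE are assumptions of mine, not an established protocol:

// Each JPEG frame is split into chunks that fit in one datagram, each
// prefixed with frameId (int), chunkIndex (short) and chunkCount (short).
// The receiver buffers chunks per frameId and decodes the JPEG once all of
// them have arrived, discarding frames with missing chunks.
private static final int CHUNK_SIZE = 60000;

private void sendFrame(byte[] jpeg, int frameId) throws IOException {
    int chunkCount = (jpeg.length + CHUNK_SIZE - 1) / CHUNK_SIZE;
    for (int i = 0; i < chunkCount; i++) {
        int offset = i * CHUNK_SIZE;
        int length = Math.min(CHUNK_SIZE, jpeg.length - offset);
        ByteBuffer chunk = ByteBuffer.allocate(8 + length);
        chunk.putInt(frameId).putShort((short) i).putShort((short) chunkCount);
        chunk.put(jpeg, offset, length);
        byte[] payload = chunk.array();
        socket.send(new DatagramPacket(payload, payload.length, address, port));
    }
}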
I'm in the draw loop of an Android view:
Bitmap bitmap = Bitmap.createBitmap(this.getWidth(),
this.getHeight(), Bitmap.Config.ARGB_4444);
Canvas newCanvas = new Canvas(bitmap);
super.draw(newCanvas);
Log.d("AndroidUnity","Canvas Drawn!");
mImageView.setImageBitmap(bitmap);
And the above code shows me the correct drawing on the attached ImageView.
When I convert the bitmap to a byte array:
ByteBuffer byteBuffer = ByteBuffer.allocate(bitmap.getByteCount());
bitmap.copyPixelsToBuffer(byteBuffer);
byte[] bytes = byteBuffer.array();
Importing the bytes into Unity does not work (it shows a black image on my RawImage):
imageTexture2D = new Texture2D(width, height, TextureFormat.ARGB4444, false);
imageTexture2D.LoadRawTextureData(bytes);
imageTexture2D.Apply();
RawImage.texture = imageTexture2D;
Any ideas on how to get the Java byte[] to display as a texture/image in Unity? I've tested that the bytes are sent correctly, i.e. when I push a byte array of {1,2,3,4} from Android, I get {1,2,3,4} on the Unity side.
And that's not mentioning that Unity throws an error when trying to transfer the bytes as a byte[], so instead I have to follow this advice on the C# side:
void ReceieveAndroidBytes(AndroidJavaObject jo) {
    AndroidJavaObject bufferObject = jo.Get<AndroidJavaObject>("Buffer");
    byte[] bytes = AndroidJNIHelper.ConvertFromJNIArray<byte[]>(bufferObject.GetRawObject());
}
and a trivial byte[] container class "Buffer" on the Java side.
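For completeness, a trivial Java-side container matching the jo.Get<AndroidJavaObject>("Buffer") call above might look like the sketch below; the class name BitmapFrame and the constructor are my own guesses, since the original class is not shown:

// Hypothetical container: Unity reads the public "Buffer" field via
// jo.Get<AndroidJavaObject>("Buffer") and converts it with
// AndroidJNIHelper.ConvertFromJNIArray.
public class BitmapFrame {
    public byte[] Buffer;

    public BitmapFrame(byte[] data) {
        Buffer = data;
    }
}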
I was trying to do the exact same thing and my initial attempts also had a black texture. I do the array conversion with AndroidJNIHelper.ConvertFromJNIArray like you do except I used sbyte[] instead of byte[]. To set the actual image data I ended up using
imageTexture2D.SetPixelData(bytes, 0);
If I'm not mistaken, LoadRawTextureData expects something even rawer than an array of pixel data; it might be the format in which graphics cards store compressed textures. If that is true, then raw pixel data isn't in the right format and can't be decoded.
I want to take only a part of the screen data from a preview video callback to reduce the processing time. The problem is that I only know how to take the whole screen with onPreviewFrame:
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    myData = data;
    // + get camera resolution x, y
}
And then, with this data, I get the image:
private Bitmap getBitmapFromYUV(byte[] data, int width, int height)
{
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    YuvImage yuvImage = new YuvImage(data, ImageFormat.NV21, width, height, null);
    yuvImage.compressToJpeg(new Rect(0, 0, width, height), 100, out);
    byte[] imageBytes = out.toByteArray();
    Bitmap image = BitmapFactory.decodeByteArray(imageBytes, 0, imageBytes.length);
    return image;
}
And then I take the part of the image I want:
cutImage = Bitmap.createBitmap(image, xOffset, yOffset, customWidth, customHeight);
The problem is that I need to take lots of images and apply some image processing to them, which is why I want to reduce the time it takes to get the images. Instead of taking the whole screen and then cropping it, I want to immediately get the cropped image. Is there a way to get only part of the screen data?
OK, I finally found something. I still record all the data from the camera, but when calling compressToJpeg I crop the picture with a custom Rect. Maybe there is something better to do before this, but it is still a good improvement. Here are my changes:
yuvImage.compressToJpeg(new Rect(offsetX, offsetY, sizeCaptureX + offsetX, sizeCaptureY + offsetY ), 100, out);
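If it ever becomes worth going one step further, the NV21 buffer itself can also be cropped before building the YuvImage, so the JPEG encoder never touches the discarded pixels. A sketch under the assumption that the offsets and sizes are all even; cropNv21 is my own helper, not part of the original code:

// Copies the requested sub-rectangle out of an NV21 frame: first the Y plane
// row by row, then the interleaved VU plane (one VU row per two Y rows).
private static byte[] cropNv21(byte[] src, int srcWidth, int srcHeight,
                               int left, int top, int cropWidth, int cropHeight) {
    byte[] dst = new byte[cropWidth * cropHeight * 3 / 2];
    for (int row = 0; row < cropHeight; row++) {
        System.arraycopy(src, (top + row) * srcWidth + left,
                dst, row * cropWidth, cropWidth);
    }
    int srcUvStart = srcWidth * srcHeight;
    int dstUvStart = cropWidth * cropHeight;
    for (int row = 0; row < cropHeight / 2; row++) {
        System.arraycopy(src, srcUvStart + (top / 2 + row) * srcWidth + left,
                dst, dstUvStart + row * cropWidth, cropWidth);
    }
    return dst;
}

The cropped buffer would then be passed to new YuvImage(...) with cropWidth and cropHeight instead of the full preview size.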
I have a camera sending frames to a SurfaceView. I would like to get these frames out of the surface view and send them elsewhere. In their final form, the images must be in JPEG format. To accomplish this currently, I am creating a YUV image from the byte[] and then calling compressToJpeg. However, when I invoke compressToJpeg on every frame rather than doing nothing but displaying it, my FPS goes from ~30 to ~4. I commented out the other lines and this function appears to be the culprit.
public void onNewRawImage(byte[] data, Size size) {
    // Convert to JPEG
    YuvImage yuvimage = new YuvImage(data,
            ImageFormat.NV21, size.width, size.height, null);
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    yuvimage.compressToJpeg(new Rect(0, 0, yuvimage.getWidth(),
            yuvimage.getHeight()), 80, baos);
    byte[] jdata = baos.toByteArray();
    // Convert to Bitmap
    Bitmap bitmap = BitmapFactory.decodeByteArray(jdata, 0, jdata.length);
}
Is it possible to start in the JPEG format rather than having to convert to it? I am hoping I am making a mistake somewhere. Any help is greatly appreciated, thank you.
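The preview callback never delivers JPEG directly, so some conversion is unavoidable; what usually recovers the preview frame rate is the same idea as the circular-buffer suggestion earlier: return from the callback immediately and let a worker thread do the compression. A minimal sketch using a single-thread executor (the encoder field name is mine; in practice you would also want to skip frames while the worker is busy so the queue does not grow without bound):

// Hand each raw frame to a background executor so onNewRawImage returns
// immediately; the worker thread does the JPEG compression and can pass the
// result on to whatever sends the frames elsewhere.
private final ExecutorService encoder = Executors.newSingleThreadExecutor();

public void onNewRawImage(final byte[] data, final Size size) {
    final byte[] copy = data.clone(); // the camera may reuse this buffer
    encoder.execute(new Runnable() {
        @Override
        public void run() {
            YuvImage yuvimage = new YuvImage(copy, ImageFormat.NV21,
                    size.width, size.height, null);
            ByteArrayOutputStream baos = new ByteArrayOutputStream();
            yuvimage.compressToJpeg(new Rect(0, 0, size.width, size.height), 80, baos);
            byte[] jdata = baos.toByteArray();
            // use jdata here
        }
    });
}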
Currently I am using the compress method to save an image taken with the camera hardware on the Android phone to the SD card.
try {
    BitmapFactory.Options options = new BitmapFactory.Options();
    options.inSampleSize = 10;
    Bitmap myImage = BitmapFactory.decodeByteArray(imageData, 0,
            imageData.length, options);
    fileOutputStream = new FileOutputStream(
            sdImageMainDirectory.toString() + "/" + fileName + ".png");
    BufferedOutputStream bos = new BufferedOutputStream(fileOutputStream);
    myImage.compress(CompressFormat.PNG, 100, bos);
    bos.flush();
    bos.close();
} catch (IOException e) {
    e.printStackTrace();
}
Now this works perfectly fine; however, the quality of the image it saves hardly makes it worth taking the picture in the first place. I'm looking for a better way to save the picture at a higher quality.
options.inSampleSize = 10;
Here is your loss of quality: this will create an image at 1/10 of the original height and width.
From the doc:
If set to a value > 1, requests the decoder to subsample the original
image, returning a smaller image to save memory. The sample size is
the number of pixels in either dimension that correspond to a single
pixel in the decoded bitmap. For example, inSampleSize == 4 returns an
image that is 1/4 the width/height of the original, and 1/16 the
number of pixels. Any value <= 1 is treated the same as 1. Note: the
decoder will try to fulfill this request, but the resulting bitmap may
have different dimensions than precisely what has been requested.
Also, powers of 2 are often faster/easier for the decoder to honor.
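So, assuming the full-size bitmap fits in memory, the straightforward fix is simply to drop (or greatly reduce) the subsampling in the original snippet; a sketch of the changed lines only:

// Same save path as before, minus the 10x downsample that was destroying
// the resolution; use inSampleSize = 2 instead of 1 if memory is tight.
BitmapFactory.Options options = new BitmapFactory.Options();
options.inSampleSize = 1;
Bitmap myImage = BitmapFactory.decodeByteArray(imageData, 0, imageData.length, options);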