I am trying to save the RGB frames to files along with the pose data, and then do some post-processing on them. The main issue is that currently the only way to do this using the Tango Java API is to render to a GLSurfaceView, by connecting them via
tangoCameraPreview.connectToTangoCamera(mTango, TangoCameraIntrinsics.TANGO_CAMERA_COLOR);
Then you would use glReadPixels to read the pixels into an array and save that to a file.
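For illustration, the readback I described looks roughly like this (a minimal sketch of a helper called from onDrawFrame(); width and height are assumed to match the preview size):

import android.opengl.GLES20;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Minimal sketch of the glReadPixels readback described above. Call it from
// onDrawFrame() after the camera frame has been rendered; width and height
// are assumed to match the preview size.
private ByteBuffer readFrame(int width, int height) {
    ByteBuffer pixels = ByteBuffer.allocateDirect(width * height * 4); // RGBA, 4 bytes per pixel
    pixels.order(ByteOrder.nativeOrder());
    GLES20.glReadPixels(0, 0, width, height,
            GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixels);
    return pixels; // copy this off the GL thread before writing it to a file
}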
The problem is that glReadPixels is slow: with the approach I just described I am getting only about 3-4 fps.
Looking at other, more general answers about taking photo bursts, I have seen various people saying that by using a SurfaceView instead of a GLSurfaceView they managed to get up to 15 fps.
I haven't found any way to use a SurfaceView with the Tango camera: connectToTangoCamera needs a GLSurfaceView, and I can't just use Camera and bind it to a SurfaceView, because when I try to open it (via Camera.open()) it is already in use by Tango. Tango needs the camera in order to get the color-to-IMU pose data.
So I'm really not sure what workaround would let me get at least 10 fps.
You could use the C++ API with TangoService_connectOnFrameAvailable, where you get a YUV frame buffer at a reasonable speed. Check out the Tango C example video-overlay-jni-example, where they do the RGB conversion. I use this approach to hook OpenCV filters into the rendering process.
I'm trying to use code from this answer (the first one, the highest-rated one):
Android - combine multiple images into one ImageView
After reading through the code I found that it uses BitmapFactory extensively.
I'm trying to integrate the code into a performance-critical project, and Bitmap has left me with the impression of being rather taxing on the processor, which isn't something I'm pleased with. I don't want this new part of the code to slow everything down noticeably.
My code is already capable of resizing PNGs, so I'm guessing one of the following is likely the reason for the original author's use of BitmapFactory:
Resizing PNGs uses Bitmap processing by default; just because I (the author of this question, not of the code in this question) did not explicitly call the relevant functions doesn't mean it isn't actively involved;
The code can also cut and reshape images, and that is the only part that needs BitmapFactory; BitmapFactory isn't really necessary if nothing beyond resizing is required;
The code's primary function is to combine multiple images inside a single ImageView, and BitmapFactory is instrumental in achieving exactly that (I've read the code but couldn't find enough evidence to support this assumption).
I need an expert answer: a simple yes or no followed by a clear elaboration. Thanks in advance. You are, of course, welcome to point out my lapse of judgement in claiming that Bitmap slows things down.
To answer my own question:
In this particular scenario, unfortunately, I need to use Bitmap to "map" (no pun intended) my target image, then resize it and put it into the same ImageView with other images that went through identical steps. Because I am trying to combine several images inside a single view, this is inevitable.
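For context, the combining step ends up looking roughly like this (a minimal sketch only; the resource IDs, the fixed height and the side-by-side layout are placeholder assumptions):

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.Canvas;
import android.widget.ImageView;

// Minimal sketch: combine two images into one ImageView via a Bitmap-backed Canvas.
// The resource IDs, the fixed height and the side-by-side layout are placeholders.
private void showCombined(ImageView imageView) {
    Bitmap left  = BitmapFactory.decodeResource(getResources(), R.drawable.first);
    Bitmap right = BitmapFactory.decodeResource(getResources(), R.drawable.second);

    // Resize both images to the same height (the "identical steps" mentioned above).
    int h = 300;
    left  = Bitmap.createScaledBitmap(left,  left.getWidth()  * h / left.getHeight(),  h, true);
    right = Bitmap.createScaledBitmap(right, right.getWidth() * h / right.getHeight(), h, true);

    // Draw them next to each other on a single backing Bitmap and show it.
    Bitmap combined = Bitmap.createBitmap(left.getWidth() + right.getWidth(), h, Bitmap.Config.ARGB_8888);
    Canvas canvas = new Canvas(combined);
    canvas.drawBitmap(left, 0, 0, null);
    canvas.drawBitmap(right, left.getWidth(), 0, null);
    imageView.setImageBitmap(combined); // one ImageView, several source images
}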
I'm starting a project for college that uses ImageJ and Micro-Manager. I want to be able to access each pixel of the image taken from a snapshot of the camera feed. My goal is to be able to apply a custom-built function to each pixel of the snapshot. Is it possible to do this using ImageJ?
Yes you can, but at which level do you want to work: ImageJ macros, or diving into the Java code?
If you stay at the macro level, I don't think you can access the pixel values.
However, if you want to do a little bit of programming, ImageJ uses the ImageProcessor class, which is extended according to the encoding type: ByteProcessor, ShortProcessor, BinaryProcessor, ColorProcessor. You can then access the pixel values using the methods getPixels(), getPixel(x,y), and getPixelValue(x,y). But be careful with the encoding type: these methods are encoding-specific.
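As a small example of that route (a sketch only; the file paths and the per-pixel function are placeholders):

import ij.IJ;
import ij.ImagePlus;
import ij.process.ImageProcessor;

// Minimal sketch: apply a custom function to every pixel of a snapshot.
// The file paths and customFunction() are placeholders.
public class ApplyPerPixel {
    public static void main(String[] args) {
        ImagePlus imp = IJ.openImage("/path/to/snapshot.tif");
        ImageProcessor ip = imp.getProcessor();
        for (int y = 0; y < ip.getHeight(); y++) {
            for (int x = 0; x < ip.getWidth(); x++) {
                int v = ip.getPixel(x, y);            // raw value; meaning depends on the encoding
                ip.putPixel(x, y, customFunction(v)); // write the transformed value back
            }
        }
        IJ.save(imp, "/path/to/result.tif");          // or imp.show() inside ImageJ
    }

    // Placeholder for whatever per-pixel function the project needs (here: invert an 8-bit value).
    private static int customFunction(int v) {
        return 255 - v;
    }
}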
I was looking at this class: https://developer.android.com/reference/android/media/ImageReader.html
What I want is to get an InputStream from the ImageReader class and read the image data that is being rendered into the current surface. By "current surface" I mean the Android screen that is currently being displayed.
I looked at an example inside the AOSP code but it was really tough to understand.
It would be really helpful if someone could provide a nice example of the ImageReader class.
It sounds like you're trying to use ImageReader to pull data out of an arbitrary Surface. It doesn't work that way.
ImageReader is a buffer queue "CPU consumer", which is a fancy way of saying that when you send data to its Surface, code running on the CPU can examine the data directly. Not all surfaces can be handled this way -- the usage flags are set when the surface is created -- so you need to create the Surface with ImageReader#getSurface() and then pass that to something that wants to send data to a Surface.
As the documentation notes, "Several Android media API classes accept Surface objects as targets to render to, including MediaPlayer, MediaCodec, and RenderScript Allocations."
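A minimal sketch of that flow (the size and format are example values; whatever producer you choose has to support them):

import android.graphics.PixelFormat;
import android.media.Image;
import android.media.ImageReader;
import java.nio.ByteBuffer;

// Minimal sketch: create an ImageReader, hand its Surface to a producer
// (camera, MediaPlayer, virtual display, ...), and read the pixels on the CPU.
// The 1280x720 size and RGBA_8888 format are example values.
private ImageReader createCpuReader() {
    ImageReader reader = ImageReader.newInstance(1280, 720, PixelFormat.RGBA_8888, 2);
    reader.setOnImageAvailableListener(r -> {
        Image image = r.acquireLatestImage(); // may be null if nothing is pending
        if (image == null) return;
        try {
            Image.Plane plane = image.getPlanes()[0];
            ByteBuffer pixels = plane.getBuffer(); // raw RGBA bytes for the frame
            int rowStride = plane.getRowStride();  // bytes per row (may exceed width * 4)
            // ... copy or inspect the pixel data here ...
        } finally {
            image.close(); // always release the buffer back to the queue
        }
    }, null); // null handler = callbacks on the calling thread's looper
    // Pass reader.getSurface() to whatever produces the frames.
    return reader;
}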
The primary customer for ImageReader is the camera, demonstrated in the CTS test you linked to. It doesn't really work with MediaCodec decoders yet because the output formats are different from camera (and some of them are proprietary).
The virtual display test is closer to what you want, because you can direct the output of a virtual display to a Surface. For an app with normal privileges you can only send your own app's output there. The Android 4.4 "screenrecord" command does something similar.
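A minimal sketch of that direction, reusing the ImageReader from the previous snippet (size and density are example values; flags = 0 gives a private display, so only content your app explicitly renders to it, e.g. through a Presentation, will show up):

import android.content.Context;
import android.hardware.display.DisplayManager;
import android.hardware.display.VirtualDisplay;
import android.media.ImageReader;

// Minimal sketch: point a virtual display at the ImageReader's Surface.
// Size and density are example values.
private VirtualDisplay startCapture(Context context, ImageReader reader) {
    DisplayManager dm = (DisplayManager) context.getSystemService(Context.DISPLAY_SERVICE);
    return dm.createVirtualDisplay(
            "capture",           // debug name
            1280, 720, 320,      // width, height, densityDpi
            reader.getSurface(), // the CPU-readable consumer created earlier
            0);                  // flags = 0: private display; call release() when done
}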
(If you want to record what you're drawing on a SurfaceView, and you're rendering with OpenGL, there are some alternatives demonstrated here and here.)
I would like to know whether it is possible in Unity3D to have an object with the following properties; consider a human, for example:
Leg length can be set when you create the object in your game.
Arm length can be set as well.
It can be set for every part of your model.
Ability to use all joints of the bones of your model programmatically.
Also, is it possible to run Unity3D from Java? Or what would be the best way to get going with Unity3D coming from a Java background?
Regards.
There are all sorts of joints available in Unity3D which you can use with JavaScript quite effectively. Check out Character, Fixed and Hinge joints: try adding those components to your game object and tweaking their values in the inspector. You can connect items / bones with these and set things like max length, bounciness, breakage points, and more. Once you are comfortable with these you can then add them through code using AddComponent.
As far as scaling your objects goes, you can achieve this by making each joint / portion of the figure a different game object and holding them together in a parent object. Then you can go through the children of the parent and adjust the local scale of each child using this.
I'm using C# script in my Unity project, coming from a Java background, and I found it pretty easy to pick up -- in general I've found C# to be pretty similar to Java, and C# script to be pretty similar to C#.
I've been working on this problem for a while now. I'm working on a project where I receive, over a socket, a stream of images that have been processed by OpenCV, and I need to display these feeds in MT4J. We transfer the images inside a Google Protocol Buffer message. In my current (and naive) approach, I grab images off the socket and simply set them as the texture in an overloaded draw method of MTRectangle. While this works, if I attempt to display more than 3 feeds at a time, the frame rate drops to an unacceptable rate (<1 fps) and it takes up ~70-80% of my CPU.
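For context, the receive/decode path in that naive approach looks roughly like this (a minimal sketch; Frame and getImageData() are made-up names standing in for our protobuf message, and the payload is assumed to be JPEG-encoded):

import java.awt.image.BufferedImage;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import javax.imageio.ImageIO;

// Minimal sketch of the receive/decode loop. "Frame" and getImageData() are
// placeholder names for our generated protobuf message; the payload is assumed
// to be a JPEG-encoded image, which then becomes the MTRectangle's texture.
private void receiveFrames(InputStream in) throws IOException {
    while (true) {
        Frame frame = Frame.parseDelimitedFrom(in); // length-prefixed protobuf message
        if (frame == null) break;                   // end of stream
        byte[] jpeg = frame.getImageData().toByteArray();
        BufferedImage img = ImageIO.read(new ByteArrayInputStream(jpeg));
        // ... hand img to the overloaded draw method as the rectangle's new texture ...
    }
}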
A co-worker managed to use GStreamer/Xuggler to display about 20 videos in MT4J, but he was using static files instead of a real-time stream. Because of this, I tried to find a tool in MT4J or Xuggler that would let me convert the images to a video stream on the fly, but everything I found simply converts a collection of images on disk to a stream, or vice versa, not in real time.
So at this point I see two possible solutions. First, is there a more efficient way to set/draw textures in MT4J? Second, is there some tool or library in GStreamer or Xuggler that would do the image-to-video conversion in real time?
MT4J website for people who don't know what it is:
http://www.mt4j.org
Thanks in advance, and let me know if there is any information which I left out.