I was looking at this class: https://developer.android.com/reference/android/media/ImageReader.html
What I want is to get an InputStream from the ImageReader class and read the image data that is being rendered into the current Surface. By "current Surface" I mean the currently displayed Android screen.
I looked at an example inside the AOSP code but it was really tough to understand.
It would be really helpful if someone could provide a nice example of using the ImageReader class.
It sounds like you're trying to use ImageReader to pull data out of an arbitrary Surface. It doesn't work that way.
ImageReader is a buffer queue "CPU consumer", which is a fancy way of saying that when you send data to its Surface, code running on the CPU can examine the data directly. Not all surfaces can be handled this way -- the usage flags are set when the surface is created -- so you need to create the Surface with ImageReader#getSurface() and then pass that to something that wants to send data to a Surface.
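Roughly, the setup looks like this (a minimal sketch, assuming API 19+; width, height, and backgroundHandler are placeholders you would define yourself):

ImageReader reader = ImageReader.newInstance(width, height, PixelFormat.RGBA_8888, 2);
reader.setOnImageAvailableListener(new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader r) {
        Image image = r.acquireLatestImage();
        if (image == null) return;
        try {
            Image.Plane plane = image.getPlanes()[0];
            ByteBuffer pixels = plane.getBuffer();   // CPU-side view of the data
            int rowStride = plane.getRowStride();    // may be larger than width * 4
            // ... examine or copy the pixel data here ...
        } finally {
            image.close();  // always return the buffer to the queue
        }
    }
}, backgroundHandler);

Surface surface = reader.getSurface();  // hand this Surface to the producer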
As the documentation notes, "Several Android media API classes accept Surface objects as targets to render to, including MediaPlayer, MediaCodec, and RenderScript Allocations."
The primary client for ImageReader is the camera, demonstrated in the CTS test you found in the AOSP code. It doesn't really work with MediaCodec decoders yet, because the output formats are different from the camera's (and some of them are proprietary).
The virtual display test is closer to what you want, because you can direct the output of a virtual display to a Surface. For an app with normal privileges you can only send your own app's output there. The Android 4.4 "screenrecord" command does something similar.
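As a rough sketch of that route (assuming API 19+, the reader from the snippet above, and placeholder geometry values), directing a virtual display into the ImageReader looks something like:

DisplayManager dm = (DisplayManager) context.getSystemService(Context.DISPLAY_SERVICE);
VirtualDisplay vd = dm.createVirtualDisplay(
        "capture",                  // debug name for the display
        width, height, densityDpi,  // display geometry
        reader.getSurface(),        // the ImageReader consumes the output
        0);                         // no flags: a private virtual display

Your app then has to render into that display itself (for example via a Presentation); with normal privileges you cannot mirror the main screen this way.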
(If you want to record what you're drawing on a SurfaceView, and you're rendering with OpenGL, there are some alternatives demonstrated here and here.)
Related
I'm starting a project for college that uses ImageJ and Micro-Manager. I want to be able to access each pixel of a snapshot taken from the camera feed. My goal is to apply a custom-built function to each pixel of that snapshot. Is it possible to do this using ImageJ?
Yes, you can, but at which level do you want to work: ImageJ macros, or diving into the Java code?
If you stay at the macro level, I don't think you can access the pixel values.
However, if you want to do a little programming, ImageJ uses the ImageProcessor class, which is extended according to the encoding type: ByteProcessor, ShortProcessor, BinaryProcessor, ColorProcessor. You can then access the pixel values using the methods getPixels(), getPixel(x,y), and getPixelValue(x,y). But be careful with the encoding type: these methods are encoding-specific.
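For example, here is a minimal sketch of an encoding-agnostic per-pixel loop (applyMyFunction() is a hypothetical stand-in for your custom function):

import ij.ImagePlus;
import ij.process.ImageProcessor;

public class PixelFunction {
    public static void apply(ImagePlus imp) {
        ImageProcessor ip = imp.getProcessor();
        for (int y = 0; y < ip.getHeight(); y++) {
            for (int x = 0; x < ip.getWidth(); x++) {
                float v = ip.getPixelValue(x, y);           // encoding-aware read
                ip.putPixelValue(x, y, applyMyFunction(v)); // encoding-aware write
            }
        }
        imp.updateAndDraw();  // refresh the on-screen image
    }

    private static float applyMyFunction(float v) {
        return v;  // identity for the sketch; replace with your own function
    }
}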
I am trying to save the RGB frames, along with pose data, to files, and then do some post-processing on them. The main issue is that currently the only way to do this using the Tango Java API is to render to a GLSurfaceView, by connecting them via
tangoCameraPreview.connectToTangoCamera(mTango, TangoCameraIntrinsics.TANGO_CAMERA_COLOR);
You would then use glReadPixels to read the pixels into an array and save that to a file.
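For reference, that readback looks roughly like this (a sketch; it must run on the GL thread, after the frame has been drawn):

ByteBuffer pixels = ByteBuffer.allocateDirect(width * height * 4)
        .order(ByteOrder.nativeOrder());
GLES20.glReadPixels(0, 0, width, height,
        GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixels);
// pixels now holds the frame (bottom row first) and can be written to a file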
The problem is that glReadPixels is slow: with the approach I just described, I am getting about 3-4 fps.
Looking at other, more general answers about taking photo bursts, I have seen various people saying that by using SurfaceView instead of GLSurfaceView, they managed to get up to 15 fps.
I didn't find any way to use SurfaceView with the Tango camera: connectToTangoCamera needs a GLSurfaceView, and I can't just use Camera and bind it to a SurfaceView, because when I try to open it (via Camera.open()) it is already in use by Tango. Tango needs the camera in order to get the colorToIMUPose data.
So I'm really not sure what workaround I might find in order to get at least 10 fps.
You could use the C++ API with TangoService_connectOnFrameAvailable, where you will get a YUV frame buffer at a reasonable speed. Check out the Tango C example video-overlay-jni-example, where they do the RGB conversion. I use this approach to interface OpenCV filters with the rendering process.
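The conversion itself is just the standard BT.601 YUV-to-RGB math; here is a sketch in Java for consistency (the actual Tango callback is C/C++, and the coefficients assume the usual video-range YUV):

static int yuvToArgb(int y, int u, int v) {
    int c = y - 16, d = u - 128, e = v - 128;
    int r = clamp((298 * c + 409 * e + 128) >> 8);
    int g = clamp((298 * c - 100 * d - 208 * e + 128) >> 8);
    int b = clamp((298 * c + 516 * d + 128) >> 8);
    return 0xFF000000 | (r << 16) | (g << 8) | b;  // pack as ARGB
}

static int clamp(int x) {
    return x < 0 ? 0 : (x > 255 ? 255 : x);
}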
I'm looking for an easy way to record video (including audio) into a circular buffer stored in RAM.
The idea is that I can leave the video recording running, it will keep only the last 2 minutes of the feed in RAM, and I then have the option to commit that to storage if required.
Unfortunately, I cannot see an easy way to do this. So far I have investigated using:
- MediaRecorder - I could not see a way to store the output data in a buffer; the only option is setOutputFile().
- JavaCV FFmpegFrameRecorder - again, the constructor requires passing in a file.
- android.hardware.Camera.PreviewCallback - this gives a byte array for every frame, which I could add to a buffer. However, this approach does not provide any audio data.
I feel like there must be an easy way to do this, but so far I've not had much luck.
Any help with this would be greatly appreciated.
JavaCV offers cvCreateCameraCapture() to open an interface with the camera. This code demonstrates how to capture frames from the camera and display them in a window.
The interesting part is the while loop: every iteration of the loop retrieves a single frame from the camera with cvQueryFrame(). Note that this function returns an IplImage, the type responsible for storing the image grabbed from the camera.
All you need to do is store the IplImage returned by cvQueryFrame() in your circular buffer.
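A minimal sketch of that loop (assuming the old com.googlecode.javacv package layout; note that cvQueryFrame() reuses an internal buffer, so each frame must be cloned before it is stored):

import com.googlecode.javacv.cpp.opencv_core.IplImage;
import static com.googlecode.javacv.cpp.opencv_highgui.*;
import java.util.ArrayDeque;

public class RingBufferCapture {
    public static void main(String[] args) {
        final int capacity = 30 * 120;  // roughly 2 minutes at 30 fps
        ArrayDeque<IplImage> ring = new ArrayDeque<IplImage>(capacity);

        CvCapture capture = cvCreateCameraCapture(0);
        while (true) {
            IplImage frame = cvQueryFrame(capture);
            if (frame == null) break;        // camera went away
            if (ring.size() == capacity) {
                ring.pollFirst().release();  // drop the oldest frame
            }
            ring.addLast(frame.clone());     // clone: cvQueryFrame reuses its buffer
        }
        cvReleaseCapture(capture);
    }
}

Be aware that raw frames are large, so two minutes of them may not fit in RAM at full resolution, and as the question notes, this approach captures video only, not audio.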
I'm new to adaptive bit rate streaming. Basically I'm trying to write an app that shows information about the quality of the connection on an Android device.
Since Honeycomb (3.0), Android has supported adaptive bit rate streaming through HTTP Live Streaming (HLS). It seems that support for helping developers verify the quality of this connection on the device side is very limited.
What I would like to know is some low-level information about the stream, such as: the number of segments, the segment duration, the number of requests to change bit rate, and the bit rate the media player sees (to facilitate the change).
I've been able to get some information about stream quality from the MediaPlayer, MediaController, MediaMetadataRetriever, CamcorderProfile, MediaFormat, and MediaExtractor classes. However, what I'm looking for is even lower level. If possible, I'd like to be able to actually see how the player is communicating with the server.
I just started looking at the MediaCodec class; however, I can't figure out how to get the MediaCodec from a MediaPlayer. Or maybe I just don't know how to use it properly, as I cannot find any good documentation or examples.
Does anyone know whether it is possible to access the low-level information I'm looking for on Android? Is MediaCodec the way to go? If so, does anyone have any working examples of how I could get the currently used MediaCodec and extract the information I'm looking for from it? (Or could you at least point me in the right direction?)
Really appreciate any help on this one.
Cheers
I've been working on this problem for a while now. I'm working on a project where I receive, over a socket, a stream of images that have been processed by OpenCV, and I need to display these feeds in MT4J. We are transferring the images inside a Google Protocol Buffer message. In my current (and naive) approach, I grab images off the socket and simply set them as the texture in an overloaded draw method of MTRectangle. While this works, if I attempt to display more than 3 feeds at a time, the frame rate drops to an unacceptable level (<1 fps), and it takes up ~70-80% of my CPU.
A co-worker managed to use GStreamer/Xuggler to display about 20 videos in MT4J, but was using static files instead of a real-time stream. Because of this, I attempted to find a tool in MT4J or Xuggler that would let me convert the images to a video stream on the fly, but everything I found simply converts a collection of images on disk to a stream, or vice versa, not in real time.
So at this point I see two possible solutions. First, is there a more efficient way to set/draw textures in MT4J? Second, is there an existing tool or library in GStreamer or Xuggler that would do the image-to-video conversion in real time?
MT4J website for people who don't know what it is:
http://www.mt4j.org
Thanks in advance, and let me know if there is any information which I left out.