I want to capture a raw image through the Camera2 API. There are plenty of guides for it, but none of them work without a preview streamed to the screen.
I want to know if it is possible to take raw images without a preview, ideally with a class I can instantiate and just call a method on to get an Image or byte[].
Related
How to take a picture using Project Tango?
I read this answer: Using the onFrameAvailable() in Jacobi Google Tango API
which works for grabbing a frame, but the picture quality is not great. Is there any takePicture equivalent?
Note that the Java API
public void onFrameAvailable(int cameraId) {
    if (cameraId == TangoCameraIntrinsics.TANGO_CAMERA_COLOR) {
        mTangoCameraPreview.onFrameAvailable();
    }
}
does not provide RGB data. If I use the Android camera to take a picture, Tango cannot sense depth. Therefore I will have to use TangoCameraPreview.
Thanks
You don't have to use TangoCameraPreview to get frames in Java. It is really just a convenience class provided to help with getting video on the screen. It appears to be implemented entirely in Java with calls to com.google.atap.tangoservice.Tango (i.e. no calls to unpublished APIs). In fact, if you look inside the Tango SDK jar file, you can see that someone accidentally included a version of the source file - it has some diff annotations and may not be up to date, but examining it is still instructive.
I prefer not to use TangoCameraPreview and instead call Tango.connectTextureId() and Tango.updateTexture() myself to load frame pixels into an OpenGL texture that I can then use however I want. That is exactly what TangoCameraPreview does under the hood.
The best way to capture a frame in pure Java is to draw the texture at its exact size (1280x720) to an offscreen buffer and read it back. This also has the side effect of converting the texture from whatever YUV format it has into RGB (which may or may not be desirable). In OpenGL ES you do this using a framebuffer and renderbuffer.
Adding the framebuffer/renderbuffer stuff to a program that can already render to the screen isn't a lot of code - about on par with the amount needed to save a file - but it is tricky to get right when you do it for the first time. I created an Android Studio sample capture app that saves a Tango texture as a PNG to the pictures folder (when you tap the screen) in case that is helpful for anyone.
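In case it is useful, here is a rough sketch of the offscreen readback step (this is not the sample app's exact code). It assumes the 1280x720 size mentioned above, and drawCameraTexture stands in for whatever draw call you already use to render the Tango camera texture onto a quad. It also uses a texture rather than a renderbuffer as the color attachment, since that guarantees 8 bits per channel on stock GLES 2.0; the idea is otherwise the same.

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import android.graphics.Bitmap;
import android.opengl.GLES20;

public class FrameGrabber {
    // Render the camera texture into an offscreen FBO and read it back as RGBA.
    // Call this on your GL thread after Tango.updateTexture() has run.
    public static Bitmap grabFrame(Runnable drawCameraTexture) {
        final int width = 1280, height = 720;

        int[] fbo = new int[1];
        int[] colorTex = new int[1];
        GLES20.glGenFramebuffers(1, fbo, 0);
        GLES20.glGenTextures(1, colorTex, 0);

        // Color attachment: a plain RGBA texture.
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, colorTex[0]);
        GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, width, height, 0,
                GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_NEAREST);

        GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo[0]);
        GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
                GLES20.GL_TEXTURE_2D, colorTex[0], 0);

        GLES20.glViewport(0, 0, width, height);
        drawCameraTexture.run();  // your existing quad draw using the Tango texture

        ByteBuffer pixels = ByteBuffer.allocateDirect(width * height * 4)
                .order(ByteOrder.nativeOrder());
        GLES20.glReadPixels(0, 0, width, height, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixels);

        // Restore the on-screen framebuffer and clean up.
        GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
        GLES20.glDeleteTextures(1, colorTex, 0);
        GLES20.glDeleteFramebuffers(1, fbo, 0);

        // Note: the result is vertically flipped, since GL's origin is bottom-left.
        Bitmap shot = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
        pixels.rewind();
        shot.copyPixelsFromBuffer(pixels);
        return shot;
    }
}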
I would like to load video from a file, apply some transformation to it, and render it back to a file. The transformation mainly involves overlapping two videos and shifting one of them in time. Grafika has some examples relevant to this issue. RecordFBOActivity.java contains some code for rendering a video file from a surface. I'm having trouble changing two things:
instead of rendering primitives in motion I need to render a previously decoded and transformed video
I would like to render the surface to a file as fast as possible, not in step with playback
My only success so far was to load an .mp4 file and add some basic seeking features to PlayMovieActivity.java. In my research I came across these examples, which also use generated video. I didn't find them very useful, because I couldn't swap the generated video for one decoded from a file.
Is it possible to modify the code of RecordFBOActivity.java so it displays video from a file instead of the generated animation?
You can try INDE Media for Mobile, tutorials are here: https://software.intel.com/en-us/articles/intel-inde-media-pack-for-android-tutorials
Sample code showing how to enable editing or make transformation is on github: https://github.com/INDExOS/media-for-mobile
It has transcoding/remuxing functionality in the MediaComposer class and lets you edit or transform frames. Since it uses the MediaCodec API internally, encoding is done on the GPU, so it is very battery friendly and runs as fast as possible.
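As a rough sketch of what a basic transcode looks like with that library: the class names below (MediaComposer, AndroidMediaObjectFactory, VideoFormatAndroid, AudioFormatAndroid) come from the published tutorials, but treat the exact signatures, file paths, and format parameters as assumptions and verify them against the samples in the repository.

import android.content.Context;
import org.m4m.IProgressListener;
import org.m4m.MediaComposer;
import org.m4m.android.AndroidMediaObjectFactory;
import org.m4m.android.AudioFormatAndroid;
import org.m4m.android.VideoFormatAndroid;

public class TranscodeSketch {
    // Rough sketch; names assumed from the INDE Media for Mobile tutorials.
    public static void transcode(Context context, IProgressListener listener) throws java.io.IOException {
        AndroidMediaObjectFactory factory = new AndroidMediaObjectFactory(context);
        MediaComposer composer = new MediaComposer(factory, listener);

        composer.addSourceFile("/sdcard/Movies/input.mp4");   // hypothetical paths
        composer.setTargetFile("/sdcard/Movies/output.mp4");

        // Target formats: H.264 video and AAC audio.
        composer.setTargetVideoFormat(new VideoFormatAndroid("video/avc", 1280, 720));
        composer.setTargetAudioFormat(new AudioFormatAndroid("audio/mp4a-latm", 44100, 2));

        // start() runs asynchronously; progress and completion come through the listener.
        composer.start();
    }
}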
Is it possible to create a touch draw area to take signatures in a form?
I am working on an application to collect information through a form and would like to collect signatures by letting users of the app draw a signature on the screen. How would I do this on Android? Preferably I would like the "signature" stored as an image I can save somewhere.
Does anyone have any idea?
Many thanks,
Check out Google's API example of Fingerpaint -- the full app is included in the samples -- but the code for the main Java file is here.
Basically, make some small section of the view a canvas control (like in finger paint) and then, when the user presses done, save the Canvas as an image (using the Bitmap object attached to the Canvas).
You may want to turn rotation off for this particular view, otherwise the signature will get cleared when the screen rotates -- unless you implement some way of restoring it.
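A minimal sketch of that idea (the class and member names here are my own, not from the Fingerpaint sample): a custom View that records touch strokes into a Path, draws them, and can export the result as a Bitmap.

import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.graphics.Path;
import android.view.MotionEvent;
import android.view.View;

public class SignatureView extends View {
    private final Path strokes = new Path();
    private final Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);

    public SignatureView(Context context) {
        super(context);
        paint.setStyle(Paint.Style.STROKE);
        paint.setStrokeWidth(6f);
        paint.setColor(Color.BLACK);
    }

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        switch (event.getActionMasked()) {
            case MotionEvent.ACTION_DOWN:
                strokes.moveTo(event.getX(), event.getY());
                return true;
            case MotionEvent.ACTION_MOVE:
                strokes.lineTo(event.getX(), event.getY());
                invalidate();  // redraw with the new segment
                return true;
            default:
                return super.onTouchEvent(event);
        }
    }

    @Override
    protected void onDraw(Canvas canvas) {
        canvas.drawColor(Color.WHITE);
        canvas.drawPath(strokes, paint);
    }

    // Export the current signature as a Bitmap.
    public Bitmap toBitmap() {
        Bitmap bitmap = Bitmap.createBitmap(getWidth(), getHeight(), Bitmap.Config.ARGB_8888);
        draw(new Canvas(bitmap));  // draw this view into the bitmap-backed canvas
        return bitmap;
    }
}

When the user presses done you can call toBitmap() and write it out with Bitmap.compress(Bitmap.CompressFormat.PNG, 100, outputStream).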
I want to be able to take a snapshot of the camera preview being displayed on a SurfaceView. I know it has something to do with the onPreviewFrame() method, and that this returns a byte array in YUV format, but I have no idea how to call the onPreviewFrame() method on its own.
I have used it in conjunction with Camera.setPreviewCallback(), but that is continuous and keeps generating images. I need it to run once, on an onClick(), to essentially take a photo without stopping the preview window. Any ideas?
For anybody else with a similar problem: I solved it by using the setOneShotPreviewCallback() method on the Camera object to get a byte[] with the image data. That can be used to create a YuvImage, which can then be compressed to a Bitmap or whatever you need.
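A minimal sketch of that approach, assuming the camera is already open and previewing with the default NV21 preview format (class and method names here are illustrative):

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.ImageFormat;
import android.graphics.Rect;
import android.graphics.YuvImage;
import android.hardware.Camera;
import java.io.ByteArrayOutputStream;

public class PreviewSnapshot {
    // Grab exactly one preview frame without stopping the preview.
    public static void grabOneFrame(Camera camera) {
        camera.setOneShotPreviewCallback(new Camera.PreviewCallback() {
            @Override
            public void onPreviewFrame(byte[] data, Camera cam) {
                Camera.Size size = cam.getParameters().getPreviewSize();
                // Default preview format is NV21; wrap it and compress to JPEG.
                YuvImage yuv = new YuvImage(data, ImageFormat.NV21, size.width, size.height, null);
                ByteArrayOutputStream out = new ByteArrayOutputStream();
                yuv.compressToJpeg(new Rect(0, 0, size.width, size.height), 90, out);
                byte[] jpeg = out.toByteArray();
                Bitmap snapshot = BitmapFactory.decodeByteArray(jpeg, 0, jpeg.length);
                // use 'snapshot' as needed; the preview keeps running
            }
        });
    }
}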
Capture the preview image into a canvas and hold a lock on it. You can then easily save it to a Bitmap.
Refer to this post for a complete explanation with sample code:
http://www.anddev.org/using_camerapreview_to_save_a_picture_on_disk-t226.html
I have an application in JavaME that can display the feed from the viewfinder using the VideoControl
Item videoItem = (Item)vidc.initDisplayMode(VideoControl.USE_GUI_PRIMITIVE, null);
and take a snapshot using the appropriate method. However, I don't want to capture the whole photo, just the thumbnail shown in the viewfinder. The data is fed to the device's display, so it is there somewhere. Can I get the raw data visible in the videoItem instead of calling the getSnapshot method, which already introduces some encoding, needs permissions, and takes a lot of time?
Thanks in advance.
I'm afraid there's no way to do this. The viewfinder's image isn't available to you except through getSnapshot(), which as you said is not instant due to encoding and permissions.
The fact that the viewfinder is being fed directly to the device's display means it can be implemented natively far more quickly than passing the encoded bytes to Java.
If you specifically need a thumbnail-sized image, you'd need to perform manual resizing of the image returned by getSnapshot().
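For reference, a rough sketch of that manual resize in MIDP 2.0, using nearest-neighbour sampling (passing null to getSnapshot() asks for the default encoding; adjust the encoding string and thumbnail size to your device):

import javax.microedition.lcdui.Image;
import javax.microedition.media.MediaException;
import javax.microedition.media.control.VideoControl;

public class Thumbnailer {
    // Take a snapshot and scale it down to a thumbnail; MIDP 2.0 has no
    // built-in image scaling, so the pixels are resampled by hand.
    public static Image snapshotThumbnail(VideoControl vidc, int thumbW, int thumbH)
            throws MediaException {
        byte[] raw = vidc.getSnapshot(null);            // default encoding
        Image full = Image.createImage(raw, 0, raw.length);

        int srcW = full.getWidth();
        int srcH = full.getHeight();
        int[] srcPixels = new int[srcW * srcH];
        full.getRGB(srcPixels, 0, srcW, 0, 0, srcW, srcH);

        // Nearest-neighbour downscale into the thumbnail buffer.
        int[] dstPixels = new int[thumbW * thumbH];
        for (int y = 0; y < thumbH; y++) {
            int srcY = y * srcH / thumbH;
            for (int x = 0; x < thumbW; x++) {
                int srcX = x * srcW / thumbW;
                dstPixels[y * thumbW + x] = srcPixels[srcY * srcW + srcX];
            }
        }
        return Image.createRGBImage(dstPixels, thumbW, thumbH, false);
    }
}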