I want to be able to take a snapshot of the camera preview being displayed on a SurfaceView. I know it has something to do with the onPreviewFrame() method, which returns a byte array in YUV format, but I have no idea how to invoke onPreviewFrame() on its own.
I have used it in conjunction with Camera.setPreviewCallback(), but that delivers frames continuously; I need a single capture on an onClick(), essentially taking a photo without stopping the preview. Any ideas?
For anybody else with a similar problem: I solved it by using the setOneShotPreviewCallback() method on the Camera object, which gives me a byte[] containing the image data. From that you can create a YUV image and then compress it to a bitmap, or whatever you need.
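On most devices the byte[] handed to the one-shot callback is in NV21 format by default (a full-resolution Y plane followed by interleaved V/U samples covering 2x2 pixel blocks). As a sketch of what that buffer actually contains, here is a minimal plain-Java NV21-to-ARGB converter; the class and method names are my own, not part of any Android API:

```java
/** Sketch: decode an NV21 preview frame (as handed to the one-shot
 *  preview callback) into packed ARGB pixel values. */
public class Nv21Decoder {

    /** width/height must match the preview size; returns one ARGB int per pixel. */
    public static int[] nv21ToArgb(byte[] nv21, int width, int height) {
        int[] argb = new int[width * height];
        int frameSize = width * height; // Y plane size; the VU plane follows it
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                int yVal = (nv21[y * width + x] & 0xFF) - 16;
                // One V/U pair is shared by each 2x2 block (V comes first in NV21)
                int vuIndex = frameSize + (y / 2) * width + (x & ~1);
                int v = (nv21[vuIndex] & 0xFF) - 128;
                int u = (nv21[vuIndex + 1] & 0xFF) - 128;
                int c = 298 * Math.max(0, yVal);
                int r = clamp((c + 409 * v + 128) >> 8);
                int g = clamp((c - 100 * u - 208 * v + 128) >> 8);
                int b = clamp((c + 516 * u + 128) >> 8);
                argb[y * width + x] = 0xFF000000 | (r << 16) | (g << 8) | b;
            }
        }
        return argb;
    }

    private static int clamp(int v) {
        return v < 0 ? 0 : (v > 255 ? 255 : v);
    }
}
```

On-device you usually don't need to decode by hand: the same byte[] can be passed straight to new YuvImage(data, ImageFormat.NV21, width, height, null) and written out with compressToJpeg().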
Capture the preview image into a canvas and hold a lock to it. You can then easily save it to a Bitmap.
Refer to this post for a complete explanation with sample code:
http://www.anddev.org/using_camerapreview_to_save_a_picture_on_disk-t226.html
Related
I want to take a raw image through the Camera2 API. There are plenty of guides for it, but none are done without a preview streamed to the screen.
I want to know if it is possible to take raw images without a preview, perhaps via a class I can instantiate where I just call a method and get back an Image or byte[].
How do I take a picture using Project Tango?
I read this answer: Using the onFrameAvailable() in Jacobi Google Tango API
which works for grabbing a frame, but the picture quality is not great. Is there a takePicture() equivalent?
Note that the Java API
    public void onFrameAvailable(int cameraId) {
        if (cameraId == TangoCameraIntrinsics.TANGO_CAMERA_COLOR) {
            mTangoCameraPreview.onFrameAvailable();
        }
    }
does not provide RGB data. If I use the Android camera to take a picture, Tango cannot sense depth, so I will have to use TangoCameraPreview.
Thanks
You don't have to use TangoCameraPreview to get frames in Java. It is really just a convenience class provided to help with getting video on the screen. It appears to be implemented entirely in Java with calls to com.google.atap.tangoservice.Tango (i.e. no calls to unpublished APIs). In fact, if you look inside the Tango SDK jar file, you can see that someone accidentally included a version of the source file - it has some diff annotations and may not be up to date, but examining it is still instructive.
I prefer not to use TangoCameraPreview and instead call Tango.connectTextureId() and Tango.updateTexture() myself to load frame pixels into an OpenGL texture that I can then use however I want. That is exactly what TangoCameraPreview does under the hood.
The best way to capture a frame in pure Java is to draw the texture at its exact size (1280x720) to an offscreen buffer and read it back. This also has the side effect of converting the texture from whatever YUV format it has into RGB (which may or may not be desirable). In OpenGL ES you do this using a framebuffer and renderbuffer.
Adding the framebuffer/renderbuffer stuff to a program that can already render to the screen isn't a lot of code - about on par with the amount needed to save a file - but it is tricky to get right when you do it for the first time. I created an Android Studio sample capture app that saves a Tango texture as a PNG to the pictures folder (when you tap the screen) in case that is helpful for anyone.
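The GL calls themselves are device-bound, but the fixup after glReadPixels() is plain array work and worth spelling out: glReadPixels() returns rows bottom-up in RGBA byte order, while Bitmap.setPixels() expects top-down packed ARGB ints. A sketch of that conversion, with my own (hypothetical) class and method names:

```java
/** Sketch: convert a glReadPixels() result (GL_RGBA / GL_UNSIGNED_BYTE,
 *  rows stored bottom-up) into top-down ARGB ints for Bitmap.setPixels(). */
public class GlReadback {

    public static int[] rgbaBottomUpToArgbTopDown(byte[] rgba, int width, int height) {
        int[] argb = new int[width * height];
        for (int row = 0; row < height; row++) {
            int srcRow = height - 1 - row;          // flip vertically
            for (int col = 0; col < width; col++) {
                int i = (srcRow * width + col) * 4; // 4 bytes per pixel: R,G,B,A
                int r = rgba[i] & 0xFF;
                int g = rgba[i + 1] & 0xFF;
                int b = rgba[i + 2] & 0xFF;
                int a = rgba[i + 3] & 0xFF;
                argb[row * width + col] = (a << 24) | (r << 16) | (g << 8) | b;
            }
        }
        return argb;
    }
}
```

For a 1280x720 readback this runs once per tap, so the O(width*height) loop is not a performance concern.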
I am working on an application where I need to load server images into a list, but the way of displaying them is a bit different: I need to display each image progressively as it downloads. Initially the image appears blurred, then as more data arrives it gets sharper, until it resolves into the original image. If you want to see what I am talking about, you can refer to the link below:
Progressive Image Rendering
In it you can see a demo of the superior PNG-style two-dimensional interlaced rendering (the second one). I googled a lot but did not find any relevant answer to my query.
Any sort of help would be appreciated.
Thanks in advance.
Try Facebook's Fresco project: http://frescolib.org/#streaming
If you use Glide, the library recommended by Google for loading images on Android, you can achieve this kind of behavior with .thumbnail(), which receives the fraction of the full resolution at which to show the image while the full-resolution version finishes loading.
Glide.with( context )
    .load( "image url" )
    .thumbnail( 0.1f )  // fraction of the full resolution to show while loading
    .into( imageView ); // ImageView where you want to display the image
Here is a more extensive explanation of the library and the thumbnails:
https://futurestud.io/blog/glide-thumbnails
I have the well-known problem that my bitmap/canvas is too big and it throws a java.lang.OutOfMemoryError.
My question is: what would be best for my needs?
The canvas should draw a graph (with given points) and can be very wide (3000px and more; theoretically its width could be much larger, like 20000px). The height is fixed.
Because that's too wide for any screen, I put it in a ScrollView and draw the whole graph into the canvas.
So it's too wide for the bitmap, and I get the error.
The second possibility would be a fixed-size canvas for which I'd write an "onScroll" method that redraws the graph depending on the user's swipe, so it would only draw the visible part of the graph.
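The windowing arithmetic behind this second option can be sketched in plain Java (class and method names are hypothetical): given the current scroll offset, compute which horizontal slice of the graph to redraw, clamped to the graph bounds.

```java
/** Sketch: compute the [start, end) x-range of a wide graph that is visible
 *  for a given scroll offset, so an onScroll handler can redraw only that slice. */
public class GraphWindow {

    /** Returns {startX, endX}, clamped to [0, totalWidth]. */
    public static int[] visibleRange(int scrollX, int viewWidth, int totalWidth) {
        // Clamp the scroll position so the window never runs past either edge
        int start = Math.max(0, Math.min(scrollX, totalWidth - viewWidth));
        int end = Math.min(totalWidth, start + viewWidth);
        return new int[]{start, end};
    }
}
```

On each swipe you would translate the graph points by -start and draw them into a view-sized bitmap, so memory stays bounded at viewWidth x height instead of 20000px x height.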
Would that be the better way or is there a way to make the first option work?
Anyhow, please give me some hints and example code for the solution.
Here is the code:
Bitmap bitmap = Bitmap.createBitmap(speedCanvasWidth, speedCanvasHeight, Bitmap.Config.RGB_565); // I also tried ARGB_8888
speedCanvas = new Canvas(bitmap);
graph.setImageBitmap(bitmap);
Thanks in advance
You can handle this with a BitmapRegionDecoder. Just create an instance of one that points to your image. The system will maintain a handle on the image and then you can call decode on the decoder based on what rectangle you want to be displayed within the canvas. Updates to the canvas will have to be handled based on your needs. This will help prevent loading this large image that you have to handle.
You can further get details of the Bitmap in question by checking its dimensions first. This can be done by decoding with a BitmapFactory.Options whose inJustDecodeBounds flag is set to true, which reads the image bounds without actually loading the Bitmap into memory.
For instance, a quick retrieval could be done with the following:
// newInstance() may throw IOException
BitmapRegionDecoder decoder = BitmapRegionDecoder.newInstance("pathToFile", true);
Bitmap regionOfInterestBitmap = decoder.decodeRegion(rectWithinImage, null); // or with options you have decided on
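Once inJustDecodeBounds has given you outWidth/outHeight, a power-of-two inSampleSize can be computed before the real decode. This is the sizing logic described in the Android developer docs, reproduced here as a plain-Java sketch (the class name is my own):

```java
/** Sketch: pick a power-of-two inSampleSize so the decoded bitmap still
 *  covers at least reqWidth x reqHeight (per the Android docs' approach). */
public class SampleSize {

    public static int calculateInSampleSize(int width, int height, int reqWidth, int reqHeight) {
        int inSampleSize = 1;
        if (height > reqHeight || width > reqWidth) {
            int halfWidth = width / 2;
            int halfHeight = height / 2;
            // Keep doubling while the downsampled image still covers the request
            while ((halfWidth / inSampleSize) >= reqWidth
                    && (halfHeight / inSampleSize) >= reqHeight) {
                inSampleSize *= 2;
            }
        }
        return inSampleSize;
    }
}
```

Set the result on a fresh BitmapFactory.Options (with inJustDecodeBounds back to false) for the actual decode; a 4000x3000 source decoded at inSampleSize 4 uses roughly one sixteenth of the memory.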
Currently I am developing an application for decoding barcodes using mobile phones.
I have a problem with how to draw a line or a square on the camera screen to easily capture the barcode.
What is the easiest way of doing it?
Unfortunately this isn't as easy as it sounds. If you have a preview image from a phone's camera, it's often rendered as an overlay. This means that the camera preview image doesn't actually form any part of your application's canvas and you can't interact directly with the pixels. The phone simply draws the preview on top of your application, completely out of your control.
If you draw a line on your screen, then it will be drawn underneath the preview image.
The way around this isn't too pretty. You need to actually capture an image from the camera. Unfortunately this means capturing a JPEG or a PNG file into a byte buffer. You then load this image using Image.createImage and render that to the screen. You can then safely draw on top of that.
This also has the undesirable downside of giving you an appalling frame-rate. You might want to enumerate all the possible file formats you can capture in and try them all to see which one is quickest.
You can do this by using OverlayControl, assuming that your target devices support it.
I think I remember seeing a good example at the Sony Ericsson developer forums.
Edit: found this (does not involve use of OverlayControl)