I have an application in JavaME that can display the feed from viewfinder using the VideoControl
Item videoItem = (Item)vidc.initDisplayMode(VideoControl.USE_GUI_PRIMITIVE, null);
and take a snapshot using the appropriate method. However, I don't want to capture a full photo, just the thumbnail-sized image shown in the viewfinder. The data is being fed to the device's display, so it must be there somewhere. Can I get the raw data visible in the videoItem without calling the getSnapshot method, which already introduces some encoding, requires permissions, and takes a lot of time?
Thanks in advance.
I'm afraid there's no way to do this. The viewfinder's image isn't available to you except through getSnapshot(), which as you said is not instant due to encoding and permissions.
The fact that the viewfinder is being fed directly to the device's display means it can be implemented natively far more quickly than passing the encoded bytes to Java.
If you specifically need a thumbnail size image, you'd need to perform manual resizing of the image returned by getSnapshot().
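If you do go the getSnapshot() route, the resizing itself is straightforward. Below is a minimal sketch, assuming you have already obtained an ARGB pixel array from the snapshot (in Java ME via Image.createImage(byte[]) followed by Image.getRGB()). Nearest-neighbor sampling keeps it cheap on constrained devices; the class and method names here are illustrative, not part of any API:

```java
public class ThumbnailScaler {
    // Nearest-neighbor scale of an ARGB pixel array, as returned by
    // javax.microedition.lcdui.Image.getRGB() on Java ME.
    public static int[] scale(int[] src, int srcW, int srcH, int dstW, int dstH) {
        int[] dst = new int[dstW * dstH];
        for (int y = 0; y < dstH; y++) {
            int srcY = y * srcH / dstH;       // map destination row to source row
            for (int x = 0; x < dstW; x++) {
                int srcX = x * srcW / dstW;   // map destination column to source column
                dst[y * dstW + x] = src[srcY * srcW + srcX];
            }
        }
        return dst;
    }

    public static void main(String[] args) {
        // 2x2 source: red, green / blue, white
        int[] src = {0xFFFF0000, 0xFF00FF00, 0xFF0000FF, 0xFFFFFFFF};
        int[] thumb = scale(src, 2, 2, 4, 4);
        System.out.println(Integer.toHexString(thumb[0])); // prints "ffff0000"
    }
}
```

On Java ME you would then turn the scaled array back into a drawable image with Image.createRGBImage(thumb, dstW, dstH, true).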
I want to play a video in a photo frame when an image is detected. Has anybody done this using ARCore? Any help would be greatly appreciated.
Thanks
I think you mean you want to add a video as a renderable in ARCore, in your case when an image is detected.
There is actually (at the time of writing) an example included with Sceneform showing how to add a video as a renderable - it is available here: https://github.com/google-ar/sceneform-android-sdk/tree/master/samples/chromakeyvideo
This particular example also applies a Chroma filter but you can simply ignore that part.
The approach is roughly:
create an ExternalTexture to play the video on
create a MediaPlayer and set its surface to the ExternalTexture's surface
build a new renderable with the ExternalTexture
create a node and add it to your scene
set the renderable for the node to the new ModelRenderable you built
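The steps above can be sketched roughly as follows. This is a sketch only, not a drop-in implementation: context, anchorNode, and the R.raw resources are placeholders, and the sampler name "videoTexture" must match whatever external texture name your renderable's material declares:

```java
// Assumes a working Sceneform setup (ArFragment, scene, anchorNode).
ExternalTexture texture = new ExternalTexture();

MediaPlayer player = MediaPlayer.create(context, R.raw.video); // placeholder resource
player.setSurface(texture.getSurface());
player.setLooping(true);

ModelRenderable.builder()
        .setSource(context, R.raw.video_screen)   // placeholder model to render onto
        .build()
        .thenAccept(renderable -> {
            // "videoTexture" must match the external sampler in the material
            renderable.getMaterial().setExternalTexture("videoTexture", texture);

            Node videoNode = new Node();
            videoNode.setParent(anchorNode);      // attach to your scene
            videoNode.setRenderable(renderable);
            if (!player.isPlaying()) {
                player.start();
            }
        });
```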
For Augmented Images, ARCore will automatically estimate the physical size of the image it detects, so long as the image's tracking state is TRACKING. From the documentation:
ARCore will attempt to estimate the physical image's width based on its understanding of the world. If the optional physical size is specified in the database, this estimation process will happen more quickly. However, the estimated size may be different from the specified size.
Your renderable will be sized to fit inside this by default, but you can scale the renderable up or down as you want.
There is a series of articles available which may cover your exact case, depending on exactly what you need, along with some example code here: https://proandroiddev.com/arcore-sceneform-simple-video-playback-3fe2f909bfbc
ExternalTexture no longer works these days; create a VideoNode to show videos instead.
A reference example: https://github.com/SceneView/sceneform-android/blob/master/samples/video-texture/src/main/java/com/google/ar/sceneform/samples/videotexture/MainActivity.java
I want to take a raw image through the Camera2 API. There are plenty of guides for it, but none are done without a preview streamed to the screen.
I want to know if it is possible to take raw images without a preview, perhaps via a class I can instantiate, calling a method to get an Image or byte[].
I'm creating a simple application that converts an image's dimensions to a size the user wants. I'm laying it out over three screens:
The first screen allows the user to select an image, which is then displayed on screen.
The second screen displays the attributes of the file (path, name, height and width), with the option of adjusting the name, height and width.
The third screen displays the resized image with an option to save the new image.
At the moment I'm only passing the image path between the classes and decoding the bitmap within each class, e.g.
Bitmap bmap = BitmapFactory.decodeFile(image_URL);
My question: is it better to pass the path between the classes, or to pass the Bitmap?
Many thanks
You have several options:
Share the image through static data, preferably in your Application class. Example: Using a class to store static data in Java?
Pass the image through the Intent data, which is possible but highly discouraged.
Pass the URL as you suggest and re-open and resize it each time.
Use a combination. For example, you could pass only the image information to your second screen, which displays said information.
I suggest that the first option is best practice and will work well for you.
That is the correct approach. A Bitmap shouldn't be passed from one activity to another; passing the path is always recommended.
I think it depends on where your image is. If the image is on the device, it's best to pass the image path; but if the image is on the web and has to be downloaded, passing the Bitmap is better than passing the URL (to reduce internet usage and be faster for the user).
Is it possible to create a touch draw area to take signatures in a form?
I am working on an application to collect information through a form, and I would like to collect signatures by letting users of the app draw a signature on the screen. How would I do this on Android? Preferably I would like the signature stored as an image to save somewhere.
Does anyone have any idea?
Many thanks,
Check out Google's API demo FingerPaint -- the full app is included in the SDK samples -- but the code for the main Java file is here.
Basically, make some small section of the view a canvas control (as in FingerPaint), and when the user presses Done, save the canvas as an image (using the Bitmap object attached to the Canvas).
You may want to turn rotation off for this particular view; otherwise the signature will get cleared when the device rotates -- unless you implement some way of preserving it.
I want to be able to take a snapshot of a camera preview being displayed on a SurfaceView. I know it has something to do with the onPreviewFrame() method, which returns a byte array in YUV format, but I have no idea how to call the onPreviewFrame() method on its own.
I have used it in conjunction with Camera.setPreviewCallback() but this is a continuous thing that keeps generating the image, I need it to do it once on an onClick(), to pretty much take a photo without making the preview window stop. Any ideas?
For anybody else with a similar problem: I solved it by using the setOneShotPreviewCallback() method on the Camera object, which delivers a byte[] with the image data. From that you can create a YuvImage and then compress it to a Bitmap or whatever you need.
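For reference, the byte[] that setOneShotPreviewCallback() delivers is in NV21 format by default. On Android you would normally wrap it in android.graphics.YuvImage and call compressToJpeg(), but the conversion itself can be sketched in plain Java. This is a minimal, unoptimized BT.601 conversion; the class name is illustrative:

```java
public class Nv21Converter {
    // Convert an NV21 (YUV420SP) frame, as delivered by Camera.PreviewCallback,
    // into ARGB_8888 pixels using the standard studio-range BT.601 formulas.
    public static int[] nv21ToArgb(byte[] nv21, int width, int height) {
        int[] argb = new int[width * height];
        int frameSize = width * height;
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                int yVal = nv21[y * width + x] & 0xFF;
                // Chroma is subsampled 2x2; in NV21, V comes first, then U.
                int uvIndex = frameSize + (y >> 1) * width + (x & ~1);
                int v = nv21[uvIndex] & 0xFF;
                int u = nv21[uvIndex + 1] & 0xFF;

                int c = yVal - 16, d = u - 128, e = v - 128;
                int r = clamp((298 * c + 409 * e + 128) >> 8);
                int g = clamp((298 * c - 100 * d - 208 * e + 128) >> 8);
                int b = clamp((298 * c + 516 * d + 128) >> 8);
                argb[y * width + x] = 0xFF000000 | (r << 16) | (g << 8) | b;
            }
        }
        return argb;
    }

    private static int clamp(int v) {
        return v < 0 ? 0 : (v > 255 ? 255 : v);
    }
}
```

In practice the YuvImage route is less code and hardware-assisted, so treat the above as an explanation of what happens under the hood rather than what you should ship.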
Capture the preview image into a canvas and hold a lock to it. You can then easily save it to a Bitmap.
Refer to this post for a complete explanation with sample code:
http://www.anddev.org/using_camerapreview_to_save_a_picture_on_disk-t226.html