I've been experimenting with ARCore for the past few months and have read almost all of the documentation. Speaking in reference to the sample app, what I want to do is extract the superimposed image from the app, i.e. a frame containing both the camera texture and the bots drawn by OpenGL (like a screenshot). In Preview 2 they provide a TextureReader class, but it extracts just the camera texture. I've tried a lot but haven't been able to succeed in getting the superimposed image. Is there a way to do it, or is it just impossible?
Sample code specifically for the HelloAR sample to capture the image (and save it to the device) is in this answer: How to take picture with camera using ARCore
I think what you basically want is a screenshot of the OpenGL view. This question should help you: Screenshot on android OpenGL ES application
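If it helps, here is a minimal sketch of that approach, assuming it runs on the GL thread right after your onDrawFrame() draw calls (width and height are the surface dimensions, which you would supply yourself):

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import android.graphics.Bitmap;
import android.graphics.Matrix;
import android.opengl.GLES20;

// Call while the GL context is current, after both the camera background
// and the virtual objects have been drawn for this frame.
private Bitmap captureFrame(int width, int height) {
    ByteBuffer buf = ByteBuffer.allocateDirect(width * height * 4);
    buf.order(ByteOrder.nativeOrder());
    GLES20.glReadPixels(0, 0, width, height,
            GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, buf);

    Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    bitmap.copyPixelsFromBuffer(buf);

    // glReadPixels returns rows bottom-up, so flip the bitmap vertically.
    Matrix flip = new Matrix();
    flip.postScale(1f, -1f, width / 2f, height / 2f);
    return Bitmap.createBitmap(bitmap, 0, 0, width, height, flip, true);
}

Because it reads back the composited framebuffer, this captures the camera image and the rendered objects together, which a camera-only reader like TextureReader cannot do.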
I am new to computer vision, but I am trying to code an Android app which does the following:
Get the live camera preview and try to detect one logo in it (I have the logo in my resources), in real time. Draw a rect around the logo if found; if there is no match, don't draw the rectangle.
I already tried a couple of things including template-matching and feature detection using ORB.
Why those didn't work:
Template-matching:
Issues with scaling and rotation. I tried a multi-scale variant, but (a) the performance was really bad and (b) the rectangle was always drawn while searching for the image; there was no way to confirm in the code whether the logo had actually been found or not.
ORB feature detection:
Also pretty slow (5-6 fps), but it worked OK-ish. The other problem was that, again, I could never be sure whether the logo was in the picture or not: ORB found random matches even when the logo was absent.
Like I said, I am very new to this. I would appreciate the help on what would be the best way to achieve:
Confirm whether a picture A (around 200x200 pixels) is in the ROI of the camera picture (around 600x600 pixels).
This shouldn't take longer than 50 ms per frame. I don't know if that's even possible, though. So if a correct way to do this takes a bit longer than that, I would just do the work in a separate thread and only analyze, say, every fifth camera frame.
Would appreciate any hints or code examples on how to achieve that. Thank you!
For logo detection, I would highly recommend using an OpenCV Haar classifier. It is easy to generate training samples from a collection of images of the logo, or from one logo image with many distortions.
If you can apply a few rules, such as the minimum and maximum size of the logo to be detected and the possible regions of the image where it can appear, you can run the detector at a better speed than you mention getting with ORB.
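As a rough sketch of what the detection side looks like with OpenCV's Java bindings (the cascade file name is hypothetical; you would produce it with opencv_traincascade from your generated samples):

import org.opencv.core.Mat;
import org.opencv.core.MatOfRect;
import org.opencv.core.Rect;
import org.opencv.core.Size;
import org.opencv.objdetect.CascadeClassifier;

public class LogoDetector {
    private final CascadeClassifier cascade;

    public LogoDetector(String cascadePath) {
        // e.g. "logo_cascade.xml", trained with opencv_traincascade (hypothetical file)
        cascade = new CascadeClassifier(cascadePath);
    }

    // Returns detected logo rectangles; an empty array means "no logo in this
    // frame", which gives you the explicit found/not-found signal ORB lacked.
    public Rect[] detect(Mat grayFrame) {
        MatOfRect hits = new MatOfRect();
        // The min/max size constraints prune the scale pyramid and speed up detection.
        cascade.detectMultiScale(grayFrame, hits, 1.1, 3, 0,
                new Size(60, 60), new Size(400, 400));
        return hits.toArray();
    }
}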
How do I take a picture using Project Tango?
I read this answer: Using the onFrameAvailable() in Jacobi Google Tango API
which works for grabbing a frame, but the picture quality is not great. Is there a takePicture equivalent?
Note that the Java API
public void onFrameAvailable(int cameraId) {
    if (cameraId == TangoCameraIntrinsics.TANGO_CAMERA_COLOR) {
        mTangoCameraPreview.onFrameAvailable();
    }
}
does not provide RGB data. If I use the Android camera to take a picture, Tango cannot sense depth, so I will have to use TangoCameraPreview.
Thanks
You don't have to use TangoCameraPreview to get frames in Java. It is really just a convenience class provided to help with getting video on the screen. It appears to be implemented entirely in Java with calls to com.google.atap.tangoservice.Tango (i.e. no calls to unpublished APIs). In fact, if you look inside the Tango SDK jar file, you can see that someone accidentally included a version of the source file; it has some diff annotations and may not be up to date, but examining it is still instructive.
I prefer not to use TangoCameraPreview and instead call Tango.connectTextureId() and Tango.updateTexture() myself to load frame pixels into an OpenGL texture that I can then use however I want. That is exactly what TangoCameraPreview does under the hood.
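In outline, that pure-Java path looks something like this (method names as referenced above, but the exact signatures may vary between Tango SDK releases, so treat this as a sketch):

// One-time setup: create a GL_TEXTURE_EXTERNAL_OES texture yourself
// (createOesTexture() is a hypothetical helper) and hand its id to Tango.
int textureId = createOesTexture();
tango.connectTextureId(TangoCameraIntrinsics.TANGO_CAMERA_COLOR, textureId);

// Then, in your render loop (GL context current), latch the newest camera
// frame into that texture before drawing with it:
tango.updateTexture(TangoCameraIntrinsics.TANGO_CAMERA_COLOR);
// ... sample textureId in a shader, or render it offscreen as shown below ...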
The best way to capture a frame in pure Java is to draw the texture at its exact size (1280x720) to an offscreen buffer and read it back. This also has the side effect of converting the texture from whatever YUV format it has into RGB (which may or may not be desirable). In OpenGL ES you do this using a framebuffer and renderbuffer.
Adding the framebuffer/renderbuffer stuff to a program that can already render to the screen isn't a lot of code - about on par with the amount needed to save a file - but it is tricky to get right when you do it for the first time. I created an Android Studio sample capture app that saves a Tango texture as a PNG to the pictures folder (when you tap the screen) in case that is helpful for anyone.
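Here is a sketch of the offscreen render-and-read-back, GLES20 only. I attach a texture rather than a renderbuffer here, since core GLES 2.0 renderbuffers don't guarantee an RGBA8 color format; drawCameraTexture() is a hypothetical helper that draws the camera texture as a full-screen quad:

// Create a 1280x720 offscreen target backed by an RGBA texture.
int[] fbo = new int[1], tex = new int[1];
GLES20.glGenFramebuffers(1, fbo, 0);
GLES20.glGenTextures(1, tex, 0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, 1280, 720, 0,
        GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_NEAREST);
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo[0]);
GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
        GLES20.GL_TEXTURE_2D, tex[0], 0);

// Draw the camera texture at its native size, then read the pixels back as RGB(A).
GLES20.glViewport(0, 0, 1280, 720);
drawCameraTexture(); // hypothetical: full-screen quad sampling the camera texture
ByteBuffer pixels = ByteBuffer.allocateDirect(1280 * 720 * 4).order(ByteOrder.nativeOrder());
GLES20.glReadPixels(0, 0, 1280, 720, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixels);
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0); // restore the on-screen target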
I am working on an application where I need to load server images into a list, but the way of displaying them is a bit different. I need to display each image as it gets downloaded, i.e. pixel by pixel: initially the image appears blurred, then, as it downloads, it gets sharper until it becomes the original image. If you want to see what I am talking about, refer to the link below:
Progressive Image Rendering
In this you can see a demo of the superior PNG-style two-dimensional interlaced rendering (the second one). I googled a lot but did not find any relevant answer to my query.
Any sort of help would be appreciated.
Thanks in advance.
Try Facebook's Fresco project: http://frescolib.org/#streaming
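Roughly, progressive rendering in Fresco is enabled like this (API from memory of the Fresco docs, so verify against the current version; the image URL is a placeholder):

// At app startup: enable progressive JPEG decoding in the image pipeline.
ImagePipelineConfig config = ImagePipelineConfig.newBuilder(context)
        .setProgressiveJpegConfig(new SimpleProgressiveJpegConfig())
        .build();
Fresco.initialize(context, config);

// Per image: request progressive rendering so scans are drawn as they arrive.
ImageRequest request = ImageRequestBuilder
        .newBuilderWithSource(Uri.parse("https://example.com/image.jpg")) // placeholder URL
        .setProgressiveRenderingEnabled(true)
        .build();
draweeView.setController(
        Fresco.newDraweeControllerBuilder()
                .setImageRequest(request)
                .setOldController(draweeView.getController())
                .build());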
If you use Glide, the library recommended by Google for loading images on Android, you can achieve this kind of behavior with .thumbnail(), which receives the fraction of the full resolution you'd like the image to show at while the full resolution finishes loading.
Glide.with( context )
    .load( "image url" )
    .thumbnail( 0.1f )   // fraction of the full resolution to show while loading
    .into( imageView );  // ImageView where you want to display the image
Here is a more extensive explanation of the library and the thumbnails:
https://futurestud.io/blog/glide-thumbnails
Best reader,
I'm stuck on one of my concepts.
I'm making a program which classroom children can measure themselves with.
This is what the program includes:
- 1 webcam (only used for a simple webcam view.)
- 2 phidgets (don't mind these.)
So, this was my plan: I'll draw a rectangle on the webcam view and make it repaint itself constantly.
When the repainting is stopped by one of the phidgets, the rectangle's value will be returned in centimeters or meters.
I've already written the code for the rectangle that repaints itself, and this was my result:
(It's a roundRectangle, the lines are kind of hard to see in this image, sorry about that.)
As you can see, the background is now simply black.
I want to set the background of this JFrame to a webcam view (if possible) and then draw the rectangle over the webcam view instead of the black background.
I've already looked into JMF, FMJ and the like, but I get errors even after checking my webcam path and adding the needed JAR libraries. So I want to try other options.
So;
- I simply want to open my webcam and use it as the background (yes, a live stream, if possible in some way),
and then draw this rectangle over it.
I'm thus wondering if this is possible, or whether there are other options for me to achieve this.
Hope you understand my situation, and please ask if anything's unclear.
EDIT:
I got my camera to open through Java now. The running camera is of type Process.
This is where I got the code that opens my camera: http://www.linglom.com/2007/06/06/how-to-run-command-line-or-execute-external-application-from-java/
I adjusted it a little so it opens my camera instead.
But now I'm wondering: is it possible to set a process as the background of a JFrame?
Or can I somehow add the process to a JPanel and then add it to a JFrame?
I've tried several things without any success.
My program as it is now, when I run it, opens the measuring frame and the camera view separately.
But the goal is to fuse them and make the repainting-rectangle paint over the camera view.
Help much appreciated!
I don't think it's a matter of setting a webcam stream as the background for your interface. More likely, you need to create a media player component, add it to your GUI, and then overlay your rectangles on top of that component.
As you probably know from searching for Java webcam solutions in Stack Overflow already, it's not easy, but hopefully the JMF Specs and API Guide will help you through it. The API guide is a PDF and has sections on receiving media streams, as well as sample code.
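For what it's worth, here is a rough sketch of that idea with JMF. The vfw:// locator is Windows-specific and device-dependent, so treat it as an assumption; the glass pane is what lets the rectangle paint on top of the video component:

import java.awt.*;
import javax.media.*;
import javax.swing.*;

public class WebcamOverlay {
    public static void main(String[] args) throws Exception {
        // Device locator varies per machine; "vfw://0" is a common Windows default (assumption).
        Player player = Manager.createRealizedPlayer(new MediaLocator("vfw://0"));

        JFrame frame = new JFrame("Measuring rectangle over webcam");
        frame.add(player.getVisualComponent(), BorderLayout.CENTER); // the live preview

        // The glass pane paints above all child components, including the video.
        frame.setGlassPane(new JComponent() {
            @Override protected void paintComponent(Graphics g) {
                g.setColor(Color.RED);
                g.drawRoundRect(50, 50, getWidth() - 100, getHeight() - 100, 20, 20);
            }
        });
        frame.getGlassPane().setVisible(true);

        frame.setSize(640, 480);
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.setVisible(true);
        player.start();
    }
}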
Currently I am developing an application for decoding barcodes using mobile phones.
I am having trouble drawing a line or a square on the camera screen to make it easier to capture the barcode.
What is the easiest way of doing it?
Unfortunately this isn't as easy as it sounds. If you have a preview image from a phone's camera, then it's often rendered as an overlay. This means that the camera preview image doesn't actually form any part of your application's canvas and you can't interact directly with the pixels. The phone simply draws the preview on top of your application, completely out of your control.
If you draw a line on your screen, then it will be drawn underneath the preview image.
The way around this isn't too pretty. You need to actually capture an image from the camera. Unfortunately this means capturing a JPEG or a PNG file into a byte buffer. You then load this image using Image.createImage and render that to the screen. You can then safely draw on top of that.
This also has the undesirable downside of giving you an appalling frame-rate. You might want to enumerate all the possible file formats you can capture in and try them all to see which one is quickest.
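In MIDP 2.0 / MMAPI terms (javax.microedition.media and javax.microedition.media.control), the capture-then-draw loop looks roughly like this; it's a sketch, and the snapshot encoding and supported formats are device-dependent:

// Setup (once): open the camera player and grab its video control.
Player player = Manager.createPlayer("capture://video");
player.realize();
VideoControl vc = (VideoControl) player.getControl("VideoControl");
// ... initDisplayMode(...), player.start() ...

// Per frame: snapshot into a byte buffer, decode, then draw with the overlay.
byte[] raw = vc.getSnapshot(null);               // null = device default encoding
Image frame = Image.createImage(raw, 0, raw.length);

// Inside your Canvas.paint(Graphics g):
g.drawImage(frame, 0, 0, Graphics.TOP | Graphics.LEFT);
g.setColor(0xFF0000);                            // red guide box for the barcode
g.drawRect(20, 40, 136, 40);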
You can do this by using OverlayControl, assuming that your target devices support it.
I think I remember seeing a good example at the Sony Ericsson developer forums.
Edit: found this (does not involve use of OverlayControl)