I am making an Android app which uses the Google Face API to detect faces in all the images in the gallery. It takes a long time to process all the images, so the app gets stuck for a long time. Is there any workaround?
I tried reducing the size of the images before processing, but then the detection results were faulty.
If you look in the documentation of the FaceDetector.Builder you will see that you can set some properties that will increase the speed.
E.g.:
public FaceDetector.Builder setProminentFaceOnly (boolean prominentFaceOnly)
You can also disable face tracking:
FaceDetector detector = new FaceDetector.Builder(context)
.setTrackingEnabled(false)
.build();
Tracking is enabled by default, and it may slow down detection if you don't need the feature.
2 minutes for 715 images is a really good time.
Steps that can be taken:
Enable fast mode in FaceDetector.
Set setTrackingEnabled to false if you don't need tracking.
Set the minimum face size to a value appropriate to your dataset.
Load the bitmaps using the Universal Image Loader or Glide library. I used UIL.
640x480 is an optimal size for face detection and classification; scaling images down to that size cuts processing time while giving almost the same results.
Set setLandmarkType and setClassificationType according to your needs, and disable them if not required.
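Putting those steps together, here is a minimal sketch of a speed-oriented detector, assuming the Mobile Vision FaceDetector API; the minimum face size and the 640x480 target are illustrative values:
import android.content.Context;
import android.graphics.Bitmap;
import android.util.SparseArray;
import com.google.android.gms.vision.Frame;
import com.google.android.gms.vision.face.Face;
import com.google.android.gms.vision.face.FaceDetector;

SparseArray<Face> detectFaces(Context context, Bitmap original) {
    FaceDetector detector = new FaceDetector.Builder(context)
            .setMode(FaceDetector.FAST_MODE)                        // trade accuracy for speed
            .setTrackingEnabled(false)                              // no tracking across frames
            .setLandmarkType(FaceDetector.NO_LANDMARKS)             // skip landmark detection
            .setClassificationType(FaceDetector.NO_CLASSIFICATIONS) // skip smile/eye classification
            .setMinFaceSize(0.15f)                                  // illustrative: ignore faces below 15% of image width
            .build();
    // Scale down before detecting; 640x480 is the size suggested above.
    Bitmap scaled = Bitmap.createScaledBitmap(original, 640, 480, true);
    SparseArray<Face> faces = detector.detect(new Frame.Builder().setBitmap(scaled).build());
    detector.release(); // free native resources when done
    return faces;
}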
I want to play a video in a photo frame when an image is detected. Has anybody done this using ARCore? It would be a great help. Thanks!
I think you mean you want to add a video as a renderable in ARCore, in your case when an image is detected.
There is actually (at the time of writing) an example included with Sceneform showing how to add a video as a renderable - it is available here: https://github.com/google-ar/sceneform-android-sdk/tree/master/samples/chromakeyvideo
This particular example also applies a Chroma filter but you can simply ignore that part.
The approach is roughly:
create an ExternalTexture to play the video on
create a MediaPlayer and set its surface to the ExternalTexture's surface
build a new renderable with the ExternalTexture
create a node and add it to your scene
set the renderable for the node to the new ModelRenderable you built
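A rough sketch of those steps, modeled on the chromakeyvideo sample; R.raw.my_video, the plane model resource, and the "videoTexture" material parameter name are assumptions taken from that sample's setup:
ExternalTexture texture = new ExternalTexture();

MediaPlayer player = MediaPlayer.create(context, R.raw.my_video); // placeholder video resource
player.setSurface(texture.getSurface()); // the video renders into the ExternalTexture
player.setLooping(true);

ModelRenderable.builder()
        .setSource(context, R.raw.video_plane) // placeholder: a plane model with a video material
        .build()
        .thenAccept(renderable -> {
            // "videoTexture" is the material parameter name used in the sample's material
            renderable.getMaterial().setExternalTexture("videoTexture", texture);
            Node videoNode = new Node();
            videoNode.setParent(anchorNode); // e.g. an AnchorNode created from the AugmentedImage
            videoNode.setRenderable(renderable);
            if (!player.isPlaying()) {
                player.start();
            }
        });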
For augmented images, ARCore will automatically estimate the size of the image it detects, as long as the image's state is TRACKING. From the documentation:
ARCore will attempt to estimate the physical image's width based on its understanding of the world. If the optional physical size is specified in the database, this estimation process will happen more quickly. However, the estimated size may be different from the specified size.
Your renderable will be sized to fit inside this by default, but you can also scale the renderable up or down as you want.
There is a series of articles available which may cover your exact case, depending on exactly what you need, along with some example code here: https://proandroiddev.com/arcore-sceneform-simple-video-playback-3fe2f909bfbc
ExternalTexture is not working nowadays; create a VideoNode to show videos instead.
The sample to refer to is https://github.com/SceneView/sceneform-android/blob/master/samples/video-texture/src/main/java/com/google/ar/sceneform/samples/videotexture/MainActivity.java
I am new to computer vision but I am trying to code an android app which does the following:
Get the live camera preview and try to detect one logo in it (I have the logo in my resources), in real time. Draw a rect around the logo if found; if there is no match, don't draw the rectangle.
I already tried a couple of things including template-matching and feature detection using ORB.
Why they didn't work:
Template-matching:
Issues with scaling and rotation. I tried a multi-scale variant of it, but (a) the performance was really bad and (b) the rectangle was always shown while it searched for the image. There was no way to actually confirm in the code whether the logo was found or not.
ORB feature detection:
Also pretty slow (5-6 fps), but it worked okay-ish. The other problem was that, again, I could never be sure whether the logo was in the picture or not. ORB found random matches even when the logo was not in the picture.
Like I said, I am very new to this. I would appreciate help on the best way to achieve the following:
Confirm whether a picture A (around 200x200 pixels) is in the ROI of the camera picture (around 600x600 pixels).
This shouldn't take longer than 50 ms per frame. I don't know if that's even possible, though. So if a correct way to do this would take a bit longer, I would just do the work in a separate thread and only analyze every fifth camera frame or so.
I would appreciate any hints or code examples on how to achieve that. Thank you!
For logo detection, I would highly recommend using an OpenCV Haar classifier. It is easy to generate training samples from a collection of images of the logo, or from one logo image with many distortions.
If you can apply a few rules, like the minimum and maximum size of the logo to be detected and the possible regions of the image where it can appear, you can run the detector faster than the ORB speed you mention.
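A minimal sketch using OpenCV's Java bindings, assuming you have already trained a cascade (logo_cascade.xml is a hypothetical file produced by opencv_traincascade); the size limits are illustrative:
import org.opencv.core.Mat;
import org.opencv.core.MatOfRect;
import org.opencv.core.Rect;
import org.opencv.core.Size;
import org.opencv.objdetect.CascadeClassifier;

class LogoFinder {
    // logo_cascade.xml is a hypothetical cascade trained on your logo samples
    private final CascadeClassifier cascade = new CascadeClassifier("logo_cascade.xml");

    Rect[] find(Mat grayFrame) {
        MatOfRect hits = new MatOfRect();
        cascade.detectMultiScale(
                grayFrame,
                hits,
                1.1,                 // pyramid scale factor
                3,                   // minNeighbors: raise it to cut false positives
                0,                   // flags (ignored by newer cascades)
                new Size(80, 80),    // illustrative minimum logo size
                new Size(400, 400)); // illustrative maximum logo size
        return hits.toArray();       // an empty array means no logo, so draw no rectangle
    }
}
An empty result also answers the "was it actually found?" question that template matching and ORB could not.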
I've just started learning graphics in Android Studio and began by making a growing graph (x^2). It turned out pretty well, but it grows out of the bitmap's bounds quite fast, and I was wondering if it is possible to start scaling it down as it tries to grow outside the boundaries.
Here is a good example of what I mean: whenever the graph line starts to exit the boundaries of the box, the whole graph starts to scale.
Is that possible to do with a bitmap, or any other way in Android? And if so, how?
This is pretty open-ended, but IMO it all depends on what you're trying to do and how big everything can scale.
Typically I would say that your bitmap shouldn't scale up; your graph should scale down. This keeps the memory footprint of the bitmap small, which is important on low-memory devices. IMO you should use Paths, draw them on your canvas, and adjust the stroke to make them thinner as needed. Then the graph can keep growing, and it won't matter if it draws offscreen, since that doesn't actually make the bitmap bigger. To learn how to do that, check out Google's documentation!
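A minimal sketch of that idea, with all names illustrative: the path keeps growing in its own coordinates, and onDraw() shrinks the canvas once the path's bounds exceed the view:
import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.graphics.Path;
import android.graphics.RectF;
import android.view.View;

// Illustrative sketch: draw the graph as a Path and shrink the canvas,
// not the bitmap, once the path outgrows the view.
public class GraphView extends View {
    private final Path graph = new Path();
    private final Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
    private final RectF bounds = new RectF();

    public GraphView(Context context) {
        super(context);
        paint.setStyle(Paint.Style.STROKE);
        paint.setColor(Color.BLUE);
        paint.setStrokeWidth(3f);
    }

    public void addPoint(float x, float y) {
        if (graph.isEmpty()) graph.moveTo(x, y); else graph.lineTo(x, y);
        invalidate(); // redraw with the new point
    }

    @Override
    protected void onDraw(Canvas canvas) {
        graph.computeBounds(bounds, true);
        // Scale down only once the path no longer fits inside the view.
        float sx = Math.min(1f, getWidth() / Math.max(1f, bounds.right));
        float sy = Math.min(1f, getHeight() / Math.max(1f, bounds.bottom));
        float scale = Math.min(sx, sy);
        canvas.save();
        canvas.scale(scale, scale);
        canvas.drawPath(graph, paint);
        canvas.restore();
    }
}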
If you want to use a bitmap, then you should read the suggestions here as well. You'll probably also have to get into tiling/region decoding in order to load everything efficiently when zooming in on the image.
I am developing a game on Android, like a tower defense game.
I am using a SurfaceView and some images as bitmaps (spritesheets, tilesets, buttons, backgrounds, effects, etc.).
The images are nearly 5-6 MB in total, and I get this error when I run my game:
Bitmap size exceeds VM budget
19464192-byte external allocation too large for this process.
I load the images like this:
BitmapFactory.decodeResource(res, id)
and put them into an array.
I can't scale down the images; I am using all of them.
I tried setting
options.inPurgeable=true;
and it works, but the images then load very slowly. I load a spritesheet with this flag and get very low fps while it is loading.
What can I do?
I've had this problem too; there's really no solution other than to reduce the number/size of bitmaps that you have loaded at once. Some older Android devices only allocate 16MB to the heap for your whole application, and bitmaps are stored in memory uncompressed once you load them, so it's not hard to exceed 16MB with large backgrounds, etc. (An 854x480, 32-bit bitmap is about 1.6MB uncompressed.)
In my game I was able to get around it by only loading bitmaps that I was going to use in the current level (e.g. I have a single Bitmap object for the background that gets reloaded from resources each time it changes, rather than maintaining multiple Bitmaps in memory. I just maintain an int that tracks which resource I have loaded currently.)
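A sketch of that single-background approach, with placeholder names; inSampleSize is also worth adding so large backgrounds decode at a fraction of the memory:
import android.content.res.Resources;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;

// Sketch: keep only one background Bitmap alive, reloading when the level changes.
public class BackgroundHolder {
    private Bitmap background;
    private int loadedResId = -1; // tracks which resource is currently loaded

    public Bitmap get(Resources res, int resId) {
        if (resId != loadedResId) {
            if (background != null) {
                background.recycle(); // free the old pixels immediately
            }
            BitmapFactory.Options opts = new BitmapFactory.Options();
            opts.inSampleSize = 2; // illustrative: half width/height, a quarter of the memory
            background = BitmapFactory.decodeResource(res, resId, opts);
            loadedResId = resId;
        }
        return background;
    }
}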
Your sprite sheet is huge, so I think you're right that you'll need to reduce the size of your animations. Alternatively, loading from resources is decently fast, so you might be able to get away with doing something like only loading the animation strip for the character's current direction, and have him pause slightly when he turns while you replace it with the new animation strip. That might get complicated though.
Also, I highly recommend testing your app on the emulator with the VM heap set to 16 MB, to make sure you've fixed the problem for all devices. (The emulator usually defaults to 24 MB, so it's easy for this to go untested and generate some 1-star reviews after release.)
I am not a game dev; however, I would like to think I know Android well enough.
Loading images of that file size is almost certain to throw errors. Why are the images that large?
There is an example at http://p-xr.com/android-tutorial-how-to-paint-animate-loop-and-remove-a-sprite/. If you notice, he has an explosion sprite of only ~200 KB. Even a more detailed image would not take much more space.
OK, some suggestions:
Are you loading all your spritesheets onto a single sheet, or is each spritesheet in a separate file? If they are all on one, I would split them up.
Lower the resolution of the images; an Android device is portable, and some only have a low-resolution screen. For example, the HTC Wildfire has a resolution of 240x320 (an LDPI device) and is quite a common device. You have not stated the image dimensions, so we can't be sure if this is practical.
Finally: I am not a game programmer, but I found this tutorial (part of the same series) quite enlightening - http://p-xr.com/android-tutorial-2d-canvas-graphics/. I wonder if you are applying a pattern that is not appropriate for Android, but without code I cannot say.
Right, something a little off-topic but worth noting...
People underestimate the power of the View. While there is a certain amount of logic to using a SurfaceView, the standard View will do quite a lot on its own. A SurfaceView more often than not requires an underlying thread (which you have to set up yourself) to make it work. A View, however, calls onDraw(), which can be utilized in a variety of ways, including via the postInvalidate() method (see What does postInvalidate() do?).
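As a tiny illustrative sketch, a View can drive its own animation from onDraw() with no render thread at all:
import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Paint;
import android.view.View;

// Tiny sketch: a View that animates itself by requesting its next frame
// from inside onDraw(), instead of running a SurfaceView render thread.
public class PulseView extends View {
    private final Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
    private float radius = 10f;

    public PulseView(Context context) {
        super(context);
    }

    @Override
    protected void onDraw(Canvas canvas) {
        radius = (radius + 2f) % 100f; // simple animation state
        canvas.drawCircle(getWidth() / 2f, getHeight() / 2f, radius, paint);
        postInvalidateDelayed(16); // schedule the next frame (~60 fps); safe from any thread
    }
}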
In any case, it might be worth checking out this tutorial: http://mindtherobot.com/blog/272/android-custom-ui-making-a-vintage-thermometer/. Personally, I found it an excellent example of a custom View and what you can do with one. I rewrote a few sections and made a pocket watch app.
I'm trying to do an image capture on a high-end Nokia phone (N95). The phone's internal camera is very good (5 megapixels), but in J2ME I only seem to be able to get a maximum of 1360x1020 out of it. I drew largely from this example: http://developers.sun.com/mobility/midp/articles/picture/
What I did was start with 640x480 and increase the width and height by 80 and 60, respectively until it failed. The line of code is:
jpg = mVideoControl.getSnapshot("encoding=jpeg&quality=100&width=" + width + "&height=" + height);
So the two issues are:
1. The phone throws an exception when getting an image larger than 1360x1020.
2. The higher-resolution images appear to be just smoothed versions of the smaller ones, e.g. when I take a 640x480 image and upscale it in Photoshop, I can't tell the difference between it and one that's supposedly 1360x1020.
Is this a limitation of j2me on the phone? If so does anyone know of a way to get a higher resolution from within a j2me application and/or how to access the native camera from within another application?
This explanation on the Nokia forum may help you.
It says that "The maximum image size that can be captured depends on selected image format, encoding options and free heap memory available."
and
"It is thus strongly adviced that at least larger images (larger than 1mpix) are captured as JPEG images and in a common image size (e.g. 1600x1200 for 2mpix an so on). Supported common image sizes are dependent on product and platform version."
So I suggest you try a few things:
1. 1600x1200, 1024x768, and whatever image resolutions your N95 guide mentions
2. BMP and PNG encodings as well
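A small sketch of that trial approach: walk down through the common sizes until getSnapshot() stops throwing (the size list is illustrative):
import javax.microedition.media.MediaException;
import javax.microedition.media.control.VideoControl;

// Try resolutions from largest to smallest until one is supported.
private byte[] captureLargest(VideoControl vc) {
    int[][] sizes = { {2592, 1944}, {1600, 1200}, {1024, 768}, {640, 480} };
    for (int i = 0; i < sizes.length; i++) {
        try {
            return vc.getSnapshot("encoding=jpeg&quality=100&width="
                    + sizes[i][0] + "&height=" + sizes[i][1]);
        } catch (MediaException e) {
            // this size is not supported; fall through to the next smaller one
        }
    }
    return null; // no size worked
}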
Anyway, based on my earlier experience (which could be outdated), J2ME implementations are full of bugs, so there may not be a working solution to your problem.
Your camera's native resolution is 2592x1944. Try capturing at that size to see how it goes.
This article:
http://developers.sun.com/mobility/midp/articles/picture/index.html
mentions the use of:
byte[] raw = mVideoControl.getSnapshot(null);
Image image = Image.createImage(raw, 0, raw.length);
The use of null (letting the implementation choose the defaults) seems interesting as a way to get the native resolution.
The 'quality' of a JPEG (as interpreted by the code) has nothing to do with the resolution; rather, it controls how compressed the image is. A 640x480 image at quality 100 will look noticeably better than a 640x480 image at quality 50, but will use more storage space.
Try this instead:
jpg = mVideoControl.getSnapshot("encoding=jpeg&quality=100&width=2048&height=1536");