I just started learning graphics in Android Studio and began by making a growing graph (x^2). It turned out pretty well, but it grows out of the bitmap's bounds quite fast, and I was wondering whether it is possible to start scaling it once it tries to grow outside the boundaries.
Here is a good example of what I mean. Whenever the graph line starts to exit the boundaries of the box, the whole graph starts to scale.
Is that possible to do with a bitmap, or any other way in Android? If so, how?
This is pretty open-ended, but IMO it all depends on what you're trying to do and how big everything can scale.
Typically what I would say is that your bitmap shouldn't scale up; your graph should scale down. This keeps the memory footprint of the bitmap small, which will be important on low-memory devices. IMO you should use paths, draw them on your canvas, and scale them down as needed. Then they can scale up and it won't matter if the graph draws offscreen, as it's not actually making the bitmap bigger. To learn how to do that you should check out Google's documentation!
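For illustration, here is a minimal sketch of the path-based idea, assuming a custom View with a graphPath field built from your x^2 samples; the shrinking is done here with Matrix.setRectToRect rather than by touching the stroke:

import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Matrix;
import android.graphics.Paint;
import android.graphics.Path;
import android.graphics.RectF;
import android.view.View;

public class GraphView extends View {
    private final Path graphPath = new Path();   // filled elsewhere with the x^2 points
    private final Path scaledPath = new Path();
    private final Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
    private final Matrix fitMatrix = new Matrix();
    private final RectF pathBounds = new RectF();

    public GraphView(Context context) {
        super(context);
        paint.setStyle(Paint.Style.STROKE);
        paint.setStrokeWidth(3f);
    }

    @Override
    protected void onDraw(Canvas canvas) {
        // Shrink the whole graph so it always fits inside the view,
        // no matter how far the underlying path has grown.
        graphPath.computeBounds(pathBounds, true);
        fitMatrix.setRectToRect(pathBounds,
                new RectF(0, 0, getWidth(), getHeight()),
                Matrix.ScaleToFit.CENTER);
        graphPath.transform(fitMatrix, scaledPath);  // leaves graphPath untouched
        canvas.drawPath(scaledPath, paint);
    }
}

Because only the transform changes, you never have to allocate a bigger bitmap as the graph grows.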
If you want to use a bitmap, then you should read the suggestions here as well. Also, you'll probably have to get into tiling/region decoding in order to load everything efficiently when zooming in on the image.
I am new to computer vision, but I am trying to code an Android app which does the following:
Get the live camera preview and try to detect one logo in it (I have the logo in my resources), in real time. Draw a rect around the logo if found. If there is no match, don't draw the rectangle.
I already tried a couple of things including template-matching and feature detection using ORB.
Why that didn't work:
Template-matching:
Issues with scaling and rotation. I tried a multi-scale variant of it, but a) the performance was really bad and b) the rectangle was of course always shown, since the matcher always returns some best-match location while searching for the image. There was no way to actually confirm in the code whether the logo was found or not.
ORB feature detection:
Also pretty slow (5-6 fps), but it worked OK-ish. The other problem was that I could never be sure whether the logo was in the picture or not. ORB found random matches even if the logo was not in the picture.
Like I said, I am very new to this. I would appreciate help on the best way to achieve the following:
Confirm whether a picture A (around 200x200 pixels) is present in the ROI of the camera picture (around 600x600 pixels).
This shouldn't take longer than 50 ms per frame. I don't know if that's even possible, though. So if a correct way to do this would take a bit longer than that, I would just do the work in a separate thread and only analyze, say, every fifth camera frame or so.
Would appreciate any hints or code examples on how to achieve that. Thank you!
For logo detection, I would highly recommend using an OpenCV Haar cascade classifier (CascadeClassifier). It is easy to generate training samples from a collection of images of the logo, or from one logo image with many distortions.
If you can apply a few rules, such as the minimum and maximum size of the logo to be detected and the possible regions of the image where it can appear, you can run the detector at a better speed than you mention getting with ORB.
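To give an idea of the API involved, here is a minimal sketch with the OpenCV Java bindings; "logo_cascade.xml" is a hypothetical cascade you would train yourself (e.g. with opencv_traincascade), and the size limits are only examples:

import org.opencv.core.Mat;
import org.opencv.core.MatOfRect;
import org.opencv.core.Rect;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;
import org.opencv.objdetect.CascadeClassifier;

public class LogoDetector {
    private final CascadeClassifier cascade;

    public LogoDetector(String cascadePath) {
        // e.g. the path to "logo_cascade.xml" copied to internal storage
        cascade = new CascadeClassifier(cascadePath);
    }

    /** Returns the rectangles of detected logos in the given camera frame. */
    public Rect[] detect(Mat rgbaFrame) {
        Mat gray = new Mat();
        Imgproc.cvtColor(rgbaFrame, gray, Imgproc.COLOR_RGBA2GRAY);

        MatOfRect hits = new MatOfRect();
        // The min/max sizes are the "rules" mentioned above: constraining
        // the expected logo size keeps the detector fast.
        cascade.detectMultiScale(gray, hits, 1.1, 3, 0,
                new Size(60, 60), new Size(300, 300));
        return hits.toArray();
    }
}

If detect() returns an empty array, the logo was not found, which addresses the "can't confirm a match" problem you had with template matching and ORB.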
I would like to know if there are any optimizations that could be used to improve the speed when using a large quantity of bitmaps drawn on a screen.
I use a canvas into which I load all my resources at initialization, and I use createBitmap() when I need to update a bitmap. I use ~10-15 KB files on my Galaxy Note 3 (xxhdpi) and notice lag when I reach around 20 bitmaps, which becomes nearly unusable around 35+.
I am calling createBitmap() constantly because the bitmaps use frame animation and a Matrix to rotate.
So far the only thing I've tried that made a noticeable difference is inBitmap, which gives about a 5-10% increase in the amount freed by GC_FOR_ALLOC.
Anyone care to chime in with a good answer on what is better? I've heard Flash AIR with cacheAsBitmapMatrix is a good choice, but I would like a different option (just personal preference).
EDIT:
// rectf holds the bitmap's bounds
matrix.setRotate(rotation, rectf.centerX(), rectf.centerY());
ship1 = Bitmap.createBitmap(ship1, 0, 0, ship1.getWidth(), ship1.getHeight(), matrix, true);
I think I understand my problem: I should be calling
canvas.drawBitmap(ship1, matrix, paint);
But in my onDraw method I am using
canvas.drawBitmap(ship1, srcRectf, dstRectf, paint); //srcRectf = null
I use dstRectf to move my bitmap around, but I suppose this can be replaced with setTranslate. I'll try it out, thanks Mehmet!
Bitmap stores the pixel data in the native heap*, not in the Java heap, and takes care of managing it by itself. That means GC shouldn't give you any serious headaches.
The problem is probably the constant use of createBitmap(), which is usually a really costly operation. It makes a disk I/O call at worst, or a relatively big memory allocation at best. You want to call it as little as possible, i.e. only when initially reading the bitmaps from disk.
Instead, I advise you to use a Matrix in conjunction with a Canvas: just change your Matrix and repaint your Bitmaps with it on each step.
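For example, a rough sketch of what the per-frame drawing could look like (ship1, rotation and the position fields mirror your snippet above; the rest is assumed):

import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.Canvas;
import android.graphics.Matrix;
import android.graphics.Paint;
import android.view.View;

public class ShipView extends View {
    private final Bitmap ship1;
    private final Matrix shipMatrix = new Matrix();
    private final Paint paint = new Paint(Paint.FILTER_BITMAP_FLAG);
    private float rotation, shipX, shipY;   // updated by your game logic

    public ShipView(Context context) {
        super(context);
        // Decode once; R.drawable.ship1 is a placeholder resource name.
        ship1 = BitmapFactory.decodeResource(getResources(), R.drawable.ship1);
    }

    @Override
    protected void onDraw(Canvas canvas) {
        // Rotate around the bitmap's centre, then move it into place.
        shipMatrix.setRotate(rotation, ship1.getWidth() / 2f, ship1.getHeight() / 2f);
        shipMatrix.postTranslate(shipX, shipY);
        // No new Bitmap is allocated; the same pixels are redrawn
        // through a different transform every frame.
        canvas.drawBitmap(ship1, shipMatrix, paint);
    }
}

The key point is that createBitmap() never appears in the draw path.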
EDIT:
*Only correct up to Android 2.3.3; from Android 3.0 onwards the pixel data is stored on the Dalvik heap.
I am developing a game on Android, like tower defense.
I am using a SurfaceView and I use some images as bitmaps (spritesheets, tilesets, buttons, backgrounds, effects, etc.).
The images are now nearly 5-6 MB, and I get this error when I run my game:
Bitmap size exceeds VM budget
19464192-byte external allocation too large for this process.
I load the images like this:
BitmapFactory.decodeResource(res, id)
and put them into an array.
I can't scale the images down; I am using all of them.
I tried
options.inPurgeable=true;
and it works, but the images load very slowly. I load a spritesheet with that option and, while it is loading, I get very, very low fps.
What can I do?
I've had this problem too; there's really no solution other than to reduce the number/size of bitmaps that you have loaded at once. Some older Android devices only allocate 16MB to the heap for your whole application, and bitmaps are stored in memory uncompressed once you load them, so it's not hard to exceed 16MB with large backgrounds, etc. (An 854x480, 32-bit bitmap is about 1.6MB uncompressed.)
In my game I was able to get around it by only loading bitmaps that I was going to use in the current level (e.g. I have a single Bitmap object for the background that gets reloaded from resources each time it changes, rather than maintaining multiple Bitmaps in memory. I just maintain an int that tracks which resource I have loaded currently.)
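A rough sketch of that pattern (class and field names are just illustrative):

import android.content.res.Resources;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;

/** Keeps exactly one level background in memory at a time. */
class BackgroundHolder {
    private int currentBgRes = 0;   // resource id of the bitmap currently loaded
    private Bitmap background;

    Bitmap get(Resources res, int bgResId) {
        if (bgResId != currentBgRes) {
            if (background != null) {
                background.recycle();   // free the old pixels promptly
            }
            background = BitmapFactory.decodeResource(res, bgResId);
            currentBgRes = bgResId;
        }
        return background;
    }
}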
Your sprite sheet is huge, so I think you're right that you'll need to reduce the size of your animations. Alternatively, loading from resources is decently fast, so you might be able to get away with doing something like only loading the animation strip for the character's current direction, and have him pause slightly when he turns while you replace it with the new animation strip. That might get complicated though.
Also, I highly recommend testing your app on the emulator with the VM heap set to 16 MB, to make sure you've fixed the problem for all devices. (The emulator usually defaults to 24 MB, so it's easy for that to go untested and generate some 1-star reviews after release.)
I am not a game dev; however, I would like to think I know Android well enough.
Loading images of that size is almost certain to throw errors. Why are the images that large?
There is an example at http://p-xr.com/android-tutorial-how-to-paint-animate-loop-and-remove-a-sprite/. If you notice, he has an explosion sprite of only ~200 KB. Even a more detailed image would not take much more file space.
OK, some suggestions:
- Are you loading all your spritesheets onto a single sheet, or is each spritesheet in a separate file? If they are all on one, I would split them up.
- Lower the resolution of the images; an Android device is portable, and some only have a low-resolution screen. For example, the HTC Wildfire has a resolution of 240x320 (an LDPI device) and is quite a common device. You have not stated the image dimensions, so we can't be sure if this is practical. A sketch of downsampled decoding follows this list.
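For the resolution point above, a minimal sketch of the usual two-pass decode with inJustDecodeBounds and inSampleSize (reqWidth/reqHeight would be whatever fits the target screen):

import android.content.res.Resources;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;

class Decoder {
    /** Decodes a drawable at roughly the requested size instead of full size. */
    static Bitmap decodeSampled(Resources res, int resId, int reqWidth, int reqHeight) {
        // First pass: read only the image dimensions.
        BitmapFactory.Options opts = new BitmapFactory.Options();
        opts.inJustDecodeBounds = true;
        BitmapFactory.decodeResource(res, resId, opts);

        // Pick the largest power-of-two sample size that still leaves the
        // image at least as big as requested.
        int sample = 1;
        while (opts.outWidth / (sample * 2) >= reqWidth
                && opts.outHeight / (sample * 2) >= reqHeight) {
            sample *= 2;
        }

        // Second pass: decode for real at the reduced size.
        opts.inJustDecodeBounds = false;
        opts.inSampleSize = sample;
        return BitmapFactory.decodeResource(res, resId, opts);
    }
}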
Finally, I am not a game programmer, but I found this tutorial (part of the same series) quite enlightening: http://p-xr.com/android-tutorial-2d-canvas-graphics/. I wonder if you are applying a pattern that is not appropriate for Android, but without code I cannot say.
Right, something a little off-topic but worth noting...
People underestimate the power of the View. While there is a certain amount of logic to using a SurfaceView, the standard View will do quite a lot on its own. A SurfaceView more often than not requires an underlying thread (that you will have to set up yourself) in order to make it work. A View, however, calls onDraw(), which can be driven in a variety of ways, including the postInvalidate() method (see What does postInvalidate() do?).
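As a rough illustration of that approach (not your game, just a self-redrawing View; the resource name and movement are made up):

import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.Canvas;
import android.view.View;

public class GameView extends View {
    private final Bitmap sprite;
    private float x, y;

    public GameView(Context context) {
        super(context);
        // R.drawable.sprite is a placeholder for one of your images.
        sprite = BitmapFactory.decodeResource(getResources(), R.drawable.sprite);
    }

    @Override
    protected void onDraw(Canvas canvas) {
        x += 2;   // trivial "game logic" just to animate something
        canvas.drawBitmap(sprite, x, y, null);
        // Request the next frame. From a background game thread you would
        // call postInvalidate() instead, which is safe off the UI thread.
        invalidate();
    }
}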
In any case, it might be worth checking out this tutorial: http://mindtherobot.com/blog/272/android-custom-ui-making-a-vintage-thermometer/. Personally, I found it an excellent example of a custom View and what you can do with one. I rewrote a few sections and made a pocket watch app.
I'm making an Android live wallpaper using OpenGL.
I would like to let the user pick an image to use for the background. Can someone give me any advice on an easy way to do this?
I know that I need to down sample bitmaps before loading them to avoid using huge amounts of memory for large bitmaps.
I also know OpenGL only supports texture sizes that are powers of two. It looks like messy work having to calculate the downscale factor, then calculate the closest suitable texture size, and then scale a quad with that texture appropriately to the screen.
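For what it's worth, the power-of-two step is small on its own; here is a sketch, assuming the picked image has already been down-sampled to roughly screen size:

import android.graphics.Bitmap;
import android.graphics.Canvas;

class TextureUtil {
    /** Smallest power of two that is >= n, e.g. 600 -> 1024, 512 -> 512. */
    static int nextPowerOfTwo(int n) {
        int p = 1;
        while (p < n) {
            p <<= 1;
        }
        return p;
    }

    /**
     * Copies the bitmap into the top-left corner of a power-of-two bitmap.
     * The quad's texture coordinates should then cover only
     * src.getWidth()/texW by src.getHeight()/texH of the texture.
     */
    static Bitmap toPowerOfTwo(Bitmap src) {
        int texW = nextPowerOfTwo(src.getWidth());
        int texH = nextPowerOfTwo(src.getHeight());
        Bitmap tex = Bitmap.createBitmap(texW, texH, Bitmap.Config.ARGB_8888);
        new Canvas(tex).drawBitmap(src, 0, 0, null);
        return tex;
    }
}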
It would also be nice to have the wallpaper panning feature when homescreens change.
Is there any easier way than doing all of the above myself? If not, is there sample code available for what appears to be a very common task?
I'm writing a game in Java with LWJGL (OpenGL). I'm using a library that handles a lot of messy details for me, but I need to find a much faster way to do this.
Basically I want to set every pixel on the screen to, say, a random color as fast as possible. The "random colors" are just a 2D array that gets updated every 2-3 seconds. I've tried drawing rects and using images; both are pretty slow for what I want to do.
Do I want to learn how to write a GPU shader? Is that the fastest way to do this? LWJGL exposes the OpenGL API to Java. Are there any basic tutorials on how to get started with OpenGL shaders? Or should I dynamically create a texture of some sort and just upload the entire texture each time; would that be faster?
If you were statically displaying the same image, then using a texture or a display list would suffice. But as you want to update it frequently, shaders really are the best option. Shader code executes on the GPU and modifies data in video RAM, so you have no bottleneck transferring from CPU to GPU. The next best thing would probably be a Pixel Buffer Object or Frame Buffer Object. Buffer objects let you read/write video RAM via DMA (without having to go through the CPU), so they can be pretty fast.
I haven't written any shaders yet, so I can't recommend any good resources, but SongHo's OpenGL pages are a good place to learn about buffer objects. (His examples are in C++, though.)
Textures are the fastest way to draw something on screen: draw a texture-mapped quad onto the screen and it should be fast enough. When you need to re-upload the texture data, use glTexSubImage2D to update it.
No need to use shaders.
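A minimal sketch of that update path with LWJGL (the quad drawing is left out; the 0xRRGGBB packing of the color array is an assumption):

import java.nio.ByteBuffer;
import org.lwjgl.BufferUtils;
import static org.lwjgl.opengl.GL11.*;

class ColorGridTexture {
    private final int width, height;
    private final int textureId;
    private final ByteBuffer pixels;

    ColorGridTexture(int width, int height) {
        this.width = width;
        this.height = height;
        this.pixels = BufferUtils.createByteBuffer(width * height * 4);
        textureId = glGenTextures();
        glBindTexture(GL_TEXTURE_2D, textureId);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        // Allocate the texture storage once, with no initial data.
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                GL_RGBA, GL_UNSIGNED_BYTE, (ByteBuffer) null);
    }

    /** Re-uploads the color array; called every 2-3 seconds, not every frame. */
    void update(int[][] colors) {
        pixels.clear();
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                int c = colors[y][x];               // assumed 0xRRGGBB
                pixels.put((byte) (c >> 16)).put((byte) (c >> 8))
                      .put((byte) c).put((byte) 0xFF);
            }
        }
        pixels.flip();
        glBindTexture(GL_TEXTURE_2D, textureId);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    }
}

Each frame you just draw a single quad textured with textureId stretched over the screen; the heavy work (the upload) only happens when the colors actually change.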
I've yet to do any work with shaders in OpenGL, but having faced the same scenario on multiple occasions, I handled it with a texture drawn across the whole screen, and it worked quite effectively.
I don't know how you are drawing your pixels exactly, but this limit you hit could be because of the amount of data you transfer (inefficiently?). Updating a screen full of pixels every 2-3 seconds shouldn't be hard at all. Although shaders bring you closer to the graphics card, they will never make inefficient methods fast, so...
Why is your code so slow?
What code? What code exactly did you try? What texture did you use, render to, ...?
Is it slow? How slow? How fast do you expect it to be?
How quickly can one get 1920x1080(?) pixels into video RAM? What are your hardware, drivers, and OS?
I think you need to edit/repost before we can help you solve your problem. Just because it is slow, is no guarantee at all that shaders will even be one bit faster.