I'm trying to create a gauge/meter/dial animation as part of an application I'm building.
What I want to do is display the acceleration values that I get from the sensors on a gauge. I'm able to successfully get the values from the sensors, but I don't know how to display them as an animated gauge/dial.
I've looked at the frame-by-frame and tweened animations, but they don't suit my needs, because I can't seem to use them to change the animation based on input from the code.
I should be able to display an image for the gauge and an image for the needle, and then change the rotation of the needle image according to the values to make it look animated, but I don't know how to do this.
Is there a way to do what I'm trying to do?
Is there a better/alternate way?
I'm using the standard Android SDK and Eclipse, and I'm trying to support devices running Android 2.2 and upwards.
Hi Ayos,

Draw the background (a solid color is best), then draw your indicator in a different color (black). When you get a new value, test whether it's different; if it is, draw the indicator in the background color (erasing it), then draw the new indicator. You will need to figure out a way of scaling your raw data and converting it into degrees for drawing the indicator. It's best if your difference test leaves the indicator alone when the change would move it by less than a degree or two.

If you use an image or pattern for the background, then erasing the indicator becomes a big job. You might have to redraw the background every time, which takes a lot of processing time and can make the gauge look jumpy.

Cliff
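On Android specifically, the rotate-the-needle idea from the question maps onto a custom View fairly directly. Here is a rough, untested sketch that also keeps the suggestion of only redrawing when the needle would move by a degree or more; the drawable names and the value-to-degrees scaling are placeholders you would adjust:

```java
import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.Canvas;
import android.util.AttributeSet;
import android.view.View;

public class GaugeView extends View {
    private final Bitmap dial;
    private final Bitmap needle;
    private float angle;      // current needle angle in degrees
    private float lastDrawn;  // last angle that was actually drawn

    public GaugeView(Context context, AttributeSet attrs) {
        super(context, attrs);
        dial = BitmapFactory.decodeResource(getResources(), R.drawable.gauge_dial);
        needle = BitmapFactory.decodeResource(getResources(), R.drawable.gauge_needle);
    }

    // Map a raw sensor value onto the gauge; here 0..20 m/s^2 over a 240-degree sweep.
    public void setAcceleration(float value) {
        float newAngle = (value / 20f) * 240f - 120f;
        if (Math.abs(newAngle - lastDrawn) >= 1f) { // skip sub-degree changes
            angle = newAngle;
            lastDrawn = newAngle;
            invalidate();
        }
    }

    @Override
    protected void onDraw(Canvas canvas) {
        canvas.drawBitmap(dial, 0, 0, null);
        canvas.save();
        canvas.rotate(angle, getWidth() / 2f, getHeight() / 2f);
        canvas.drawBitmap(needle, 0, 0, null);
        canvas.restore();
    }
}
```

You would then call setAcceleration() from your SensorEventListener to drive the needle.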
I want to draw strings in my LibGDX game, but I can't use BitmapFonts because the scale of my game is too small to use them.
It sounds like you mean the scale of your viewport is too small to show fonts correctly. There are two solutions. The first is better for legibility while the second is quick and dirty.
One is to use a second viewport for the UI that has an appropriate scale for text. You would first call gameViewport.apply(), draw the game, and end the batch. Then use uiViewport.apply() and then draw the UI. The downside with this method would be if you want to draw text that aligns with moving objects in the game, you would have to use the two viewports to convert coordinates. Otherwise, this is the ideal method to get a crisp looking UI. Ideally you would use a ScreenViewport and select a font size at runtime based on the screen dimensions, either by shipping your game with multiple versions of the font at different scales, or by using FreeTypeFontGenerator.
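A rough sketch of the draw order with two viewports (the viewport, batch, and font instances are assumed to be created elsewhere, e.g. in show()):

```java
// 1. Game pass: world units, small scale
gameViewport.apply();
batch.setProjectionMatrix(gameViewport.getCamera().combined);
batch.begin();
// ... draw sprites in world coordinates ...
batch.end();

// 2. UI pass: a ScreenViewport (or similar) with a sensible pixel scale for text
uiViewport.apply();
batch.setProjectionMatrix(uiViewport.getCamera().combined);
batch.begin();
font.draw(batch, "Score: 42", 20, uiViewport.getWorldHeight() - 20);
batch.end();
```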
The second method is to scale down all your text. First call bitmapFont.setUseIntegerPositions(false) so it won't round positions off to integers. Then call bitmapFont.setScale() with however much you want to shrink it to fit your game viewport.
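For example (the scale factor here is just an illustration; note that in recent LibGDX versions the scale setter has moved to bitmapFont.getData().setScale()):

```java
bitmapFont.setUseIntegerPositions(false); // don't snap glyph positions to whole pixels
bitmapFont.setScale(0.05f);               // shrink the font into world units
bitmapFont.draw(batch, "Hello", x, y);
```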
There is a gdx-freetype project:
https://www.badlogicgames.com/wordpress/?p=2300
and it uses TrueType fonts as the source to generate a bitmap font on the fly.
I'm not sure how stable it is - I haven't used it myself.
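For reference, a minimal sketch of how runtime generation with gdx-freetype typically looks (the font file name and the pixel size are placeholders chosen for illustration):

```java
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.g2d.BitmapFont;
import com.badlogic.gdx.graphics.g2d.freetype.FreeTypeFontGenerator;

// Generate a BitmapFont at a pixel size chosen at runtime (e.g. from screen size).
FreeTypeFontGenerator generator =
        new FreeTypeFontGenerator(Gdx.files.internal("fonts/myfont.ttf"));
FreeTypeFontGenerator.FreeTypeFontParameter parameter =
        new FreeTypeFontGenerator.FreeTypeFontParameter();
parameter.size = 24; // pixel height; pick based on Gdx.graphics.getHeight()
BitmapFont font = generator.generateFont(parameter);
generator.dispose(); // safe to dispose once the fonts are generated
```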
I am new to computer vision but I am trying to code an android app which does the following:
Get the live camera preview and try to detect one logo in it (I have the logo in my resources), in real time. Draw a rect around the logo if it is found. If there is no match, don't draw the rectangle.
I already tried a couple of things including template-matching and feature detection using ORB.
Why they didn't work:
Template-matching:
Issues with scaling and rotation. I tried a multi-scale variant of it, but a) the performance was really bad, and b) the rectangle was of course always shown, since template matching always returns a best match even when the logo isn't in the image. There was no way to actually confirm in the code whether the logo was found or not.
ORB feature detection:
Also pretty slow (5-6 fps), but it worked okay-ish. The other problem was that, again, I could never be sure whether the logo was in the picture or not; ORB found random matches even when the logo was not in the picture.
Like I said, I am very new to this. I would appreciate help on what would be the best way to achieve the following:
Confirm whether a picture A (around 200x200 pixels) is in the ROI of the camera picture (around 600x600 pixels).
This shouldn't take longer than 50 ms per frame. I don't know if that's even possible, though. So if a correct way to do this would take a bit longer than that, I would just do the work in a separate thread and only analyze, say, every fifth camera frame.
Would appreciate any hints or code examples on how to achieve that. Thank you!
For logo detection, I would highly recommend using an OpenCV HaarClassifier. It is easy to generate training samples from a collection of images of the logo, or from one logo image with many distortions.
If you can apply a few rules, such as the minimum and maximum size of the logo to be detected and the possible regions of the image where it can appear, you can run the detector at a better speed than the one you mention for ORB.
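As a rough illustration of what that looks like with the OpenCV Java bindings (the cascade file is one you would train yourself, and the size limits are made-up values to show where those rules go):

```java
import org.opencv.core.*;
import org.opencv.imgproc.Imgproc;
import org.opencv.objdetect.CascadeClassifier;

CascadeClassifier detector = new CascadeClassifier("logo_cascade.xml");

Mat gray = new Mat();
Imgproc.cvtColor(cameraFrame, gray, Imgproc.COLOR_RGBA2GRAY);

MatOfRect hits = new MatOfRect();
detector.detectMultiScale(gray, hits, 1.1, 3, 0,
        new Size(80, 80),     // minimum logo size: skips tiny false positives
        new Size(400, 400));  // maximum logo size: limits the scale search

for (Rect r : hits.toArray()) {
    // Imgproc.rectangle in OpenCV 3+; use Core.rectangle on 2.4
    Imgproc.rectangle(cameraFrame, r.tl(), r.br(), new Scalar(0, 255, 0), 3);
}
```

If hits is empty, you simply don't draw anything, which gives you the "confirm the logo is actually there" behaviour that template matching couldn't.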
I need to put a flag-waving animation as my Android background. The only way I have found is creating a number of images that show each stage of the waving motion and cycling through them.
Instead, is there a better way? Some languages have basic animation functions like wave(), shake(), etc., so is there any such facility in Android that would let me animate the image without using a series of images?
Or else, is there a way I can set a Flash animation (.swf) as the background?
Best reader,
I'm stuck on one of my concepts.
I'm making a program which classroom children can measure themselves with.
This is what the program includes;
- 1 webcam (only used for a simple webcam view.)
- 2 phidgets (don't mind these.)
So, this was my plan: I'll draw a rectangle on the webcam view and make it repaint itself constantly.
When the repainting is stopped by one of the phidgets, the rectangle's value will be returned in centimeters or meters.
I've already written the code of the rectangle that's repainting itself and this was my result:
(It's a roundRectangle, the lines are kind of hard to see in this image, sorry about that.)
As you can see, the background is now simply black.
I want to set the background of this JFrame to a webcam view (if possible) and then draw the rectangle over the webcam view instead of the black background.
I've already looked into JMF, FMJ and the like, but I'm getting errors even after checking my webcam path and adding the needed JAR libraries. So I want to try other options.
So;
- I simply want to open my webcam, use it as background (yes live stream, if possible in some way).
And then draw this rectangle over it.
I'm thus wondering if this is possible, or if there's other options for me to achieve this.
Hope you understand my situation, and please ask if anything's unclear.
EDIT:
I've got my camera to open through Java now. The running camera is of type "Process".
This is where I got the code for my camera to open: http://www.linglom.com/2007/06/06/how-to-run-command-line-or-execute-external-application-from-java/
I adjusted mine a little so it'll open my camera instead.
But now I'm wondering; is it possible to set a process as background of a JFrame?
Or can I somehow add the process to a JPanel and then add it to a JFrame?
I've tried several things without any success.
My program as it is now, when I run it, opens the measuring frame and the camera view separately.
But the goal is to fuse them and make the repainting-rectangle paint over the camera view.
Help much appreciated!
I don't think it's a matter of setting a webcam stream as the background for your interface. More likely, you need to create a media player component, add it to your GUI, and then overlay your rectangles on top of that component.
As you probably know from searching for Java webcam solutions on Stack Overflow already, it's not easy, but hopefully the JMF Specs and API Guide will help you through it. The API guide is a PDF and has sections on receiving media streams, as well as sample code.
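A very rough, untested sketch of the media-player part with JMF, assuming the capture device is reachable through a MediaLocator such as "vfw://0" on Windows (the locator string varies per system):

```java
import java.awt.*;
import javax.media.*;
import javax.swing.*;

public class WebcamFrame {
    public static void main(String[] args) throws Exception {
        // "vfw://0" is a typical Windows capture locator; adjust for your setup.
        Player player = Manager.createRealizedPlayer(new MediaLocator("vfw://0"));
        Component video = player.getVisualComponent();

        JFrame frame = new JFrame("Measuring");
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.add(video, BorderLayout.CENTER);
        frame.setSize(640, 480);
        frame.setVisible(true);
        player.start();

        // The measuring rectangle would then be painted in a component layered
        // above the video. Note that the JMF visual component is heavyweight (AWT),
        // so a lightweight Swing overlay may need extra work to show on top of it.
    }
}
```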
Currently I am developing an application for decoding barcodes using mobile phones.
I have a problem with how to draw a line or a square on the camera screen to easily capture the barcode.
What is the easiest way of doing it?
Unfortunately this isn't as easy as it sounds. If you have a preview image from a phone's camera, it is often rendered as an overlay. This means that the camera preview image doesn't actually form any part of your application's canvas and you can't interact directly with the pixels. The phone simply draws the preview on top of your application, completely out of your control.
If you draw a line on your screen, then it will be drawn underneath the preview image.
The way around this isn't too pretty. You need to actually capture an image from the camera. Unfortunately this means capturing a JPEG or PNG file into a byte buffer. You then load this image using Image.createImage and render it to the screen. You can then safely draw on top of that.
This also has the undesirable downside of giving you an appalling frame-rate. You might want to enumerate all the possible file formats you can capture in and try them all to see which one is quickest.
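A hedged MIDP/MMAPI sketch of that capture-and-draw cycle (whether getSnapshot works, and which encodings it accepts, depends heavily on the handset):

```java
import javax.microedition.lcdui.Canvas;
import javax.microedition.lcdui.Graphics;
import javax.microedition.lcdui.Image;
import javax.microedition.media.MediaException;
import javax.microedition.media.control.VideoControl;

class ScanCanvas extends Canvas {
    private Image frame;

    // Grab one frame from the camera and schedule a repaint.
    void grabFrame(VideoControl videoControl) throws MediaException {
        byte[] raw = videoControl.getSnapshot("encoding=jpeg");
        frame = Image.createImage(raw, 0, raw.length);
        repaint();
    }

    protected void paint(Graphics g) {
        if (frame != null) {
            g.drawImage(frame, 0, 0, Graphics.TOP | Graphics.LEFT);
        }
        g.setColor(0xFF0000);
        // guide square roughly centred on the screen for aiming at the barcode
        g.drawRect(getWidth() / 4, getHeight() / 4, getWidth() / 2, getHeight() / 2);
    }
}
```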
You can do this by using OverlayControl, assuming that your target devices support it.
I think I remember seeing a good example at the Sony Ericsson developer forums.
Edit: found this (does not involve use of OverlayControl)