Dear programmers/scripters/engineers/other people,
The problem:
I'm currently developing an augmented reality application for an Android 3.2 tablet and I'm having some issues getting an accurate compass reading. I need to know exactly which direction the tablet is facing (its rotation around the z axis), measured from north. It doesn't matter whether it's in degrees or radians.
What I currently have tried:
I used the magnetometer and accelerometer to calculate the angle. This has one big disadvantage: if you physically rotate 90 degrees, the sensors report a noticeably larger or smaller angle, even when I'm in an open field far away from metal or any magnetic objects. Applying the magnetic declination doesn't solve it either.
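For context, the approach boils down to the standard SensorManager combination; a minimal sketch (not my exact code) looks like this:

```java
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

// Minimal sketch of the accelerometer + magnetometer azimuth computation
// (the standard SensorManager approach; not my exact code).
class CompassListener implements SensorEventListener {
    private final float[] gravity = new float[3];
    private final float[] geomagnetic = new float[3];

    @Override public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() == Sensor.TYPE_ACCELEROMETER)
            System.arraycopy(event.values, 0, gravity, 0, 3);
        else if (event.sensor.getType() == Sensor.TYPE_MAGNETIC_FIELD)
            System.arraycopy(event.values, 0, geomagnetic, 0, 3);

        float[] rotation = new float[9];
        float[] orientation = new float[3];
        if (SensorManager.getRotationMatrix(rotation, null, gravity, geomagnetic)) {
            SensorManager.getOrientation(rotation, orientation);
            // orientation[0] = azimuth around the z axis, in radians from magnetic north
            float azimuthDeg = (float) Math.toDegrees(orientation[0]);
        }
    }

    @Override public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}
```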
Using the gyroscope would be an option. I have tried integrating the measured rotation rate over time into a variable to track the exact view direction. There seems to be a factor that causes distortion, though: I found out that fast rotations throw off the direction measurement. The gyro's drift wasn't that troublesome, because the application checks the other sensors for any movement and, if none is detected, ignores the gyro's rotation change.
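The gyro part is essentially an integration of the z-axis rotation rate over the time between events; again a simplified sketch rather than my actual code:

```java
import android.hardware.SensorEvent;

// Sketch of the gyro integration idea (not my exact code): accumulate the
// z-axis rotation rate (rad/s) over the time elapsed between events.
class GyroHeading {
    private float headingRad = 0f;    // integrated heading around z
    private long lastTimestampNs = 0; // SensorEvent.timestamp is in nanoseconds

    void onGyroscopeEvent(SensorEvent event) {
        if (lastTimestampNs != 0) {
            float dt = (event.timestamp - lastTimestampNs) * 1.0e-9f;
            headingRad += event.values[2] * dt;   // values[2] = angular speed around z
        }
        lastTimestampNs = event.timestamp;
    }
}
```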
The rotation vector works okay, but it has some of the same issues as the gyroscope. If you move slowly and then stop suddenly, it drifts for a few seconds. Another problem is that it becomes inaccurate with quick rotations, depending on the speed and how many turns you've made. (You don't want to know how my co-workers look at me when I'm swinging the tablet in all directions...)
The orientation sensor (Sensor.TYPE_ORIENTATION): not much to say about it. It is deprecated, so I won't use it. A lot of examples on the internet still use this sensor, and it's probably the same thing as the magnetometer/accelerometer combination anyway.
So I'm currently out of ideas. Could you help me brainstorm a solution?
Thanks in advance, yours sincerely, Roland
EDIT 1:
I am willing to provide the code I have tried.
I'm summing up our comments:
It's clear from this video that the sensors on phones are not very accurate to begin with. Also interesting to read is this.
It is important that the user calibrates the sensors by doing a figure-8 motion while holding the phone flat. An app can programmatically check whether such a calibration is necessary and notify the user. See this question for details.
To eliminate jitter, the values obtained from the sensors need to be filtered by a low-pass filter of some kind. This has also been discussed on Stack Overflow (see the sketch after this list).
The orientation obtained is relative to magnetic north, not true north. To obtain true north one must correct for declination using GeomagneticField (see the sketch after this list).
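To make those last two points concrete, here is a rough sketch of both: an exponential low-pass filter on the azimuth plus the declination correction. The smoothing factor and the location/altitude arguments are placeholders, not values from my app.

```java
import android.hardware.GeomagneticField;

// Rough sketch: smooth the raw azimuth with a simple low-pass filter, then add
// the magnetic declination from GeomagneticField to get a heading from true north.
// ALPHA and the location/altitude values are placeholders.
class HeadingFilter {
    private static final float ALPHA = 0.15f;   // smaller = smoother, but laggier
    private float filteredAzimuthDeg = 0f;

    float update(float rawAzimuthDeg, float latitude, float longitude, float altitudeMeters) {
        // Simple exponential low-pass filter to suppress jitter.
        // Note: a real implementation must also handle the wrap-around at +/-180 degrees.
        filteredAzimuthDeg += ALPHA * (rawAzimuthDeg - filteredAzimuthDeg);

        // Declination = angle between magnetic north and true north at this place and time.
        GeomagneticField field = new GeomagneticField(
                latitude, longitude, altitudeMeters, System.currentTimeMillis());
        return filteredAzimuthDeg + field.getDeclination();
    }
}
```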
A few years ago my brother and I wrote a Java program for the Mandelbrot set. Yesterday I wanted to find some cool zooms with it, but as I did deeper zooms I started to notice an issue (at a zoom value of around 1E14). It appears that pixels are being grouped together, which sometimes creates a weird stripy effect.
Messed Up Mandelbrot Zoom
Above is a picture of the issue (this is supposed to be in 4k).
Here are some links of other, less deep zooms (they have to be google links because they are too big):
https://photos.app.goo.gl/c2hUHM7sSmvKxYbQ6 https://photos.app.goo.gl/nG2cgjJ7vn7XYf8KA https://photos.app.goo.gl/TtpF1Q6hjojHSn747
The issue gets worse as you zoom further and further in, until only one color appears. The Mandelbrot set renders fine at shallower zoom levels.
When we made the program, we tried to reproduce the shading shown in the images on the Wikipedia article about the Mandelbrot set. The only information we could find about it was that it was a cubic interpolated coloring scheme, which gives it a smooth transition look. We spent a long time trying to figure it out, but eventually we did. What made it hard was that the curve could not exceed the RGB limit of 255, so the curves also had to be monotonic, and the only other help we could really find were two Wikipedia articles about this type of interpolation. We created the code from scratch, and once we figured out how to code the cubic interpolation, I worked on getting the perfect colors to use with it. Attached are the .jar and our code (it's very messy, sorry, we're amateurs):
code: https://drive.google.com/file/d/186o_lkvUQ7wux5y-9qu8I4VSC3nV25xw/view?usp=sharing
executable file (if you want): https://drive.google.com/file/d/1Z12XI-wJCJmI9x0_dXfA3pcj5CNay3K-/view?usp=sharing (you must hit enter after you enter each value)
I am hoping someone can help me troubleshoot the issue, or let me know if they have experienced it as well.
First, it's not obvious that the image you provided is wrong. The nature of Mandelbrot is that new details spring up as the zoom increases.
If there is a problem, it's almost certainly numerical precision. Doubles have 53 bits of precision. Your code is pretty unreadable, so I'm not trying to read it. But suppose you're doing things like subtracting the upper window boundary from the lower when the window is centered away from the origin, say at (-1, 0), but with a tiny size, around the ~10^-14 you mentioned. Then the subtraction throws away a factor of about 10^14 of significance, which is around 47 bits. What's left is only about 6 bits, so the relative precision of the computation has dropped to 1/64. That's not very precise, and it gets worse farther from the origin and for smaller differences.
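A tiny illustration (not taken from the posted code) of how little precision survives the subtraction at that zoom level:

```java
// Illustration only: subtracting two nearby window coordinates at ~1e-14 zoom.
public class CancellationDemo {
    public static void main(String[] args) {
        double center = -1.0;          // window centered away from the origin
        double halfWidth = 0.5e-14;    // total window width ~1e-14
        double xmin = center - halfWidth;
        double xmax = center + halfWidth;

        double width = xmax - xmin;    // catastrophic cancellation happens here
        System.out.println("expected width = 1.0E-14");
        System.out.println("computed width = " + width);

        // Only a handful of significant bits survive, so neighbouring pixels
        // collapse onto the same coordinate and produce banding/stripes.
        System.out.println("pixel step     = " + width / 3840);
    }
}
```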
Consider reading "What Every Computer Scientist Should Know About Floating-Point Arithmetic". It will let you see your code in a new light. Math translated directly into floating-point computation often explodes; this paper explains the basis for avoiding the pain.
A less intimidating read is here.
One more note: I did scan your code briefly. Please check out Horner's Rule to improve both precision and speed.
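For instance, a cubic written naively versus with Horner's rule (the coefficients here are placeholders, not the ones from your colour curves):

```java
// Placeholder coefficients a, b, c, d; t is the smooth iteration value in [0, 1].
static double cubicNaive(double t, double a, double b, double c, double d) {
    return a * t * t * t + b * t * t + c * t + d;  // more operations, more rounding error
}

static double cubicHorner(double t, double a, double b, double c, double d) {
    return ((a * t + b) * t + c) * t + d;          // Horner's rule: 3 multiplies, 3 adds
}
```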
I am currently working on a road sign detection application on Android, using OpenCV, and found out that while processing frames in real time, my camera often focuses on the brighter parts of the image, such as the sky, and everything below (road, trees, and signs) ends up dark. Because of this, my application is not able to detect the signs, since they are too dark under these conditions.
Has anyone of you dealt with such a problem and found a decent solution? If so, I would appreciate any clues (especially ones with good performance, which is important in real-time processing).
As a preprocessing step, you can apply intensity normalization. As a particular example, histogram equalization can be applied:
http://docs.opencv.org/doc/tutorials/imgproc/histograms/histogram_equalization/histogram_equalization.html
with an example code in java:
http://answers.opencv.org/question/7490/i-want-a-code-for-histogram-equalization/?answer=11014#post-id-11014
Note that such additional steps may slow down your overall detection operation. To increase overall speed, you can shrink your region of interest, for example by detecting and discarding the sky.
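For reference, a minimal Java sketch of that preprocessing step (class and variable names are mine, not from your project):

```java
import org.opencv.core.Mat;
import org.opencv.imgproc.Imgproc;

// Minimal sketch: equalize the luminance of each camera frame before detection.
public final class FramePreprocessor {
    public static Mat equalize(Mat bgrFrame) {
        Mat gray = new Mat();
        Imgproc.cvtColor(bgrFrame, gray, Imgproc.COLOR_BGR2GRAY);
        Imgproc.equalizeHist(gray, gray);   // spreads intensities over the full range
        return gray;
    }
}
```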
You said that the camera gets focused on bright objects such as the sky.
On most modern phones you can set the area of the image that is included in the auto-focus calculation. Since the sky is always in the upper part of the image (once you account for phone orientation), you can restrict the focus zone to the lower half of the image. That takes care of the problem at its source.
If, however, you meant that the camera is not focusing on the bright objects but rather white-balancing on them, you can solve this in the same way as described for focus. If that does not help, try histogram equalization and gamma correction; these will help improve the contrast.
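As an illustration of the focus/metering-zone idea using the old android.hardware.Camera API (the rectangle and weight below are assumptions, not tested values):

```java
import android.graphics.Rect;
import android.hardware.Camera;
import java.util.Collections;

// Camera.Area coordinates run from -1000 (top/left) to 1000 (bottom/right) in the
// sensor frame, so remember to account for display rotation as noted above.
void meterAndFocusOnLowerHalf(Camera camera) {
    Camera.Parameters params = camera.getParameters();
    Rect lowerHalf = new Rect(-1000, 0, 1000, 1000);   // assumed region: lower half
    if (params.getMaxNumMeteringAreas() > 0) {
        params.setMeteringAreas(Collections.singletonList(new Camera.Area(lowerHalf, 1000)));
    }
    if (params.getMaxNumFocusAreas() > 0) {
        params.setFocusAreas(Collections.singletonList(new Camera.Area(lowerHalf, 1000)));
    }
    camera.setParameters(params);
}
```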
I’m very new to Android programming, and the one thing that really has me confused relates to screen density and screen dimensions. I’ve read plenty of replies to other questions on here and I’ve read the Google docs on how to program for multiple screen sizes, but none of them have really addressed either the problem or my own general ignorance. I hope it is okay to ask this here, in the hope that somebody might finally explain it simply enough for me to wrap my brain around the problem.
First of all, I’ve been working with SurfaceViews onto which I’m throwing bitmaps. I’ve been primarily programming for the Samsung Note 10.1 (2014) edition. The screen is 2048x1536 and returns a screen density of 2.0 when I query the display. My approach has been to make graphics that work at those dimensions but within the code, I’ve used the oft-quoted formula to convert floating point dp coordinates into pixels, ready for the moment I move to other devices.
px = (dp * density) + 0.5f
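(For clarity, a minimal helper version of that conversion; the density comes from DisplayMetrics and the 0.5f is just for rounding when truncating to an int.)

```java
// Minimal sketch of the dp -> px conversion; 0.5f rounds to the nearest pixel
// when the float result is truncated to an int.
int dpToPx(float dp, android.content.Context context) {
    float density = context.getResources().getDisplayMetrics().density;
    return (int) (dp * density + 0.5f);
}
```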
I’ve now been trying to get the app working on a Samsung S2. The screen is 480 by 800. On the phone, the app is (I assume correctly) loading graphics from the HDPI folder because the pixel density is 1.5.
My first problem was that the graphics in the HDPI folder were originally far too big. I’d used the Resize program to quickly resize my original XHDPI folder. Perhaps I simply didn’t select the correct source setting, but the resulting graphics were far bigger than the actual 480x800 graphic that I eventually found filled the screen.
However, that was only a symptom of my larger confusion.
When developing an app using bitmaps, is there some magic formula I’ve missed which allows dp values to be translated to pixels, or should I be doing calculations based on the actual screen dimensions? By the formula, 100dp is approximately 150px on the (1.5-density) 800px-wide screen but 200px on the bigger (2.0-density) 2560 display. That’s about 18% horizontally across the S2’s screen but only 8% across the wider screen on the Note 10.1.
I naively assumed that a dp value would translate across all devices and simply put things in the right place. Or do I have that wrong? Just writing this up makes me even more convinced that I’ve misunderstood what dp values are. I was also confused by the suggestion of working to a theoretical baseline device with a pixel density of 1 and then adapting everything based on other pixel densities or screen sizes.
Simply being told, as I keep hearing, to work in dp units so everything is uniform hasn’t quite worked for me, so I’m now seeking the advice of wiser counsel. In other words: please help!
Thanks.
I'm working on an Android project. Its goal is to detect a predefined movement gesture of the device: if the device rotates about 45 degrees around an axis (X, Y, or Z) and then rotates back to its initial position, the gesture has happened and the app should detect it. (The positions don't have to be exact; if it rotates 50 degrees instead of 45, that's not important!)
I tried to do that by using the device's accelerometer and magnetic field sensors to continuously monitor its orientation and detect the gesture, but the results weren't acceptable (explained here). Any starting point or idea, please?
It doesn't seem like anybody is going to provide a concrete and valuable answer. So let me try to do that.
First of all, even a primitive and straightforward approach makes it clear that you do not need to process all the data coming from the sensors. Humans are not that fast, so there is no need to process 10,000 values per second in order to identify any specific move.
What you actually need is just to identify key points and make your decision. Does it sound like a tangent to you?
What I'm actually suggesting is to prototype your solution using an ordinary mouse and an available gesture recognition framework, because the underlying idea is pretty much the same. So please check:
iGesture - Gesture Recognition Framework
Mouse Gestures
That way it might be easier to develop a proper solution.
Update
Let's imagine I'm holding my phone and I need to rotate it 90 degrees counterclockwise and then 180 degrees clockwise. I hope you will not expect me to draw complex 3D shapes in the air (that would hurt usability, and frankly I do not want to lose my phone), so it is fair to say there is a point we can track, or that we can easily simulate one.
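For the simple "tilt past ~45 degrees and come back" gesture from the question, even a plain threshold on one orientation angle may be enough. This is only an illustrative sketch with made-up thresholds, separate from the framework-based approach suggested above:

```java
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

// Illustrative sketch only (hypothetical thresholds): detects "rolled past
// ~45 degrees and then came back near the start" from the rotation vector.
class TiltAndBackDetector implements SensorEventListener {
    private boolean tilted = false;

    @Override public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() != Sensor.TYPE_ROTATION_VECTOR) return;

        float[] rotation = new float[9];
        float[] orientation = new float[3];
        SensorManager.getRotationMatrixFromVector(rotation, event.values);
        SensorManager.getOrientation(rotation, orientation); // azimuth, pitch, roll (radians)

        float rollDeg = (float) Math.toDegrees(orientation[2]);
        if (!tilted && Math.abs(rollDeg) > 45f) {
            tilted = true;                       // phase 1: rotated far enough
        } else if (tilted && Math.abs(rollDeg) < 10f) {
            tilted = false;                      // phase 2: back near start -> gesture detected
            // notify a listener here
        }
    }

    @Override public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}
```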
Please see my other answer for a simple but working solution to a similar problem.
So I want to know how you would measure different light intensities when a finger is pressed against the Android device's camera with the flash on. I have read a lot around the internet about exposure, light sensors, etc., but I don't know where to start :(. So far I have made a program that opens the camera, using SurfaceHolder and SurfaceView, with the flash on as well. When I put my thumb against the camera, I can see that it turns a pinkish color, with small color changes across the area of my thumb. How can I take this information from the camera and use it for other things, like measuring heart rate? Thank you so much.
You might want to investigate ratios between red and blue light, instead of absolute brightness. You may find that this measurement helps get rid of some of the common-mode noise likely to exist in an absolute brightness measurement.
Your blood doesn't actually turn blue when it isn't oxygenated, but it does change to a different shade of red. You might be able to make a primitive O2 saturation measurement with that camera. You can pick up an actual home O2 saturation / pulse meter at a local pharmacy for less than $50 if you want some real data to correlate with. I believe that the "real" sensors correlate an IR measurement with red light.
You also might want to see if there is some kind of auto white-balance going on with the image sensor that needs to be disabled (this would be model specific to whatever device you are using).
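A rough sketch of the red/blue ratio idea, averaging each channel over a frame (it assumes the preview has already been converted to a Bitmap; the sampling step is arbitrary):

```java
import android.graphics.Bitmap;
import android.graphics.Color;

// Rough sketch of the red/blue ratio idea: average each channel over a frame.
static double redBlueRatio(Bitmap frame) {
    long redSum = 0, blueSum = 0;
    // Sample a coarse grid instead of every pixel to keep it cheap in real time.
    for (int y = 0; y < frame.getHeight(); y += 8) {
        for (int x = 0; x < frame.getWidth(); x += 8) {
            int pixel = frame.getPixel(x, y);
            redSum += Color.red(pixel);
            blueSum += Color.blue(pixel);
        }
    }
    return blueSum == 0 ? 0 : (double) redSum / blueSum;
}
```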
What are you trying to do? I'll assume that you're trying to measure your heart rate from the amount of blood in your finger. So basically you have two states: one with more blood and one with less.
I would start by measuring the average brightness of the picture, as Totoo mentioned. Once you know how to do this, make the program identify which state the finger is in from the picture. Say, if the average brightness is less than 50, your heart just pumped, putting it in state 2; otherwise it hasn't, and it stays in state 1.
Once you know how to do that, you can tell when it switches from state 1 to state 2 and back again. By dividing the number of state switches by (time passed * 2), you get the heart rate.
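A toy sketch of that state-switch counting (the brightness threshold of 50 is the hypothetical value from above; real code would calibrate it):

```java
// Toy sketch of the state-switch counting described above. The threshold (50)
// and the frame source are placeholders; real code would calibrate them.
class HeartRateEstimator {
    private boolean pumped = false;   // state 2 = heart just pumped (darker frame)
    private int switches = 0;
    private final long startMillis = System.currentTimeMillis();

    void onFrameBrightness(double averageBrightness) {
        boolean nowPumped = averageBrightness < 50;   // hypothetical threshold
        if (nowPumped != pumped) {
            pumped = nowPumped;
            switches++;
        }
    }

    double beatsPerMinute() {
        double minutes = (System.currentTimeMillis() - startMillis) / 60000.0;
        return minutes == 0 ? 0 : (switches / 2.0) / minutes;
    }
}
```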
hope I helped :)