So I want to know how you would measure different light intensities when a finger is pressed against an Android device's camera with the flash on. I have read all over the internet about exposure, light sensors, etc., but I don't know where to start :(. So far I have made a program that opens the camera using SurfaceHolders and SurfaceViews, with the flash on as well. When I put my thumb against the camera, I can see that it turns a pinkish color with small color changes throughout the area of my thumb. How can I take this information from the camera and use it for other things, like measuring heart rate? Thank you so much.
You might want to look at ratios between red and blue light instead of absolute brightness. You may find that this measurement helps get rid of some of the common-mode noise likely to exist in an absolute brightness measurement.
Your blood doesn't actually turn blue when it isn't oxygenated, but it does change to a different shade of red. You might be able to make a primitive O2 saturation measurement with that camera. You can pick up an actual home meter for O2 saturation / pulse meter at a local pharmacy for less than $50, if you want some real data to correlate with. I believe that the "real" sensors correlate an IR measurement with red light.
You also might want to see if there is some kind of auto white-balance going on with the image sensor that needs to be disabled (this would be model specific to whatever device you are using).
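As a rough, untested illustration of that red/blue ratio idea, here is a minimal Java sketch. It assumes you have already decoded a preview frame into a Bitmap (how you get that Bitmap depends on your camera setup), and the method name is just a placeholder:

    // Rough sketch: average red-to-blue ratio of one frame, assuming the
    // preview frame has already been decoded into a Bitmap.
    public static double redBlueRatio(android.graphics.Bitmap frame) {
        int width = frame.getWidth();
        int height = frame.getHeight();
        int[] pixels = new int[width * height];
        frame.getPixels(pixels, 0, width, 0, 0, width, height);

        long redSum = 0, blueSum = 0;
        for (int p : pixels) {
            redSum += android.graphics.Color.red(p);    // 0..255
            blueSum += android.graphics.Color.blue(p);  // 0..255
        }
        // Guard against an all-black frame.
        return blueSum == 0 ? 0 : (double) redSum / blueSum;
    }

Tracking how that ratio (or even just the red average) changes over time is what gives you the pulse signal.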
What are you trying to do? I'll assume that you're trying to measure your heart rate from the amount of blood in your finger. So basically you have two states: one with more blood and one with less.
I would start by measuring the average brightness of the picture, as Totoo mentioned. After you know how to do this, make a program that identifies which state the finger is in from the picture. Say, if the average brightness is less than 50, your heart just pumped, making it state 2. Otherwise, it hasn't, and it will be in state 1.
After you know how to do this, you can tell when it switches from state 1 to state 2 and the other way around. And by dividing the number of state switches by (time passed * 2), you'd get the heart rate.
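A minimal sketch of that state-switch counting (the threshold of 50 is just the example value above, and the class and method names are placeholders): feed it one average-brightness value per frame and it estimates beats per minute.

    // Two states: "pumped" (darker frame) and "not pumped" (brighter frame).
    // Two state switches correspond to one heartbeat.
    public class HeartRateEstimator {
        private static final double THRESHOLD = 50;   // example value, tune per device
        private boolean pumpedState = false;
        private int stateSwitches = 0;
        private long startTimeMs = -1;

        public void addFrame(double averageBrightness, long timestampMs) {
            if (startTimeMs < 0) startTimeMs = timestampMs;
            boolean pumped = averageBrightness < THRESHOLD;
            if (pumped != pumpedState) {
                stateSwitches++;          // one switch per rising or falling edge
                pumpedState = pumped;
            }
        }

        public double beatsPerMinute(long nowMs) {
            double minutes = (nowMs - startTimeMs) / 60000.0;
            return minutes <= 0 ? 0 : (stateSwitches / 2.0) / minutes;
        }
    }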
hope I helped :)
ARCore will not be able to detect surfaces if the lighting is insufficient.
Question: how to detect insufficient lighting in order to inform the user?
I could use a timer to display an alert after a few seconds, but I would not know whether the lack of surface detection is due to insufficient lighting or to another reason (no feature points, etc...).
So, how can I check whether insufficient lighting is the probable reason for the non-detection of planes?
Thanks.
I am not really sure how you could go about this since I have only used ARCore in Unity, but maybe you could measure the brightness of the pixels on the screen. You could use the average to see whether it is too dark or too bright. You could also use the average deviation to determine whether pretty much the whole image is dark/bright (low deviation), and not just specific parts (higher deviation).
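To make that concrete, here is a rough sketch against the ARCore Java SDK (assuming you have the current Frame; the 0.25 and 0.10 thresholds are just placeholders you would have to tune). It reads the luminance (Y) plane of the camera image and computes the mean and mean absolute deviation:

    // Mean luminance and mean absolute deviation of the camera image's Y plane.
    // This walks every byte for clarity and ignores row-stride padding;
    // subsample the pixels in production.
    try (android.media.Image image = frame.acquireCameraImage()) {    // YUV_420_888
        java.nio.ByteBuffer y = image.getPlanes()[0].getBuffer();     // luminance plane
        int n = y.remaining();

        long sum = 0;
        for (int i = 0; i < n; i++) sum += (y.get(i) & 0xFF);
        double mean = sum / (double) n;                               // 0..255

        double devSum = 0;
        for (int i = 0; i < n; i++) devSum += Math.abs((y.get(i) & 0xFF) - mean);
        double meanDeviation = devSum / n;

        boolean probablyTooDark = mean < 0.25 * 255 && meanDeviation < 0.10 * 255;
        if (probablyTooDark) {
            // show a "please move to a better-lit area" hint to the user
        }
    } catch (com.google.ar.core.exceptions.NotYetAvailableException e) {
        // camera image not available yet for this frame
    }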
Unfortunately, there's no API for this. We let the user do a calibration sequence (for us it's catching some objects, so that the user has to move the phone around), and if we don't have a plane after X tries we show a dialog telling the user that he/she should find a place with better lighting or a more structured floor.
I am currently working on a road sign detection application on Android, using OpenCV, and found out that while processing frames in real time, my camera often gets focused on the brighter parts of the image, such as the sky, and everything below (road, trees, and signs) gets dark. Because of this, my application is not able to detect the signs, since they are too dark under these conditions.
Has anyone of you dealt with such a problem and found a decent solution? If so, I would appreciate any clues (especially ones with good performance, which is important in real-time processing).
As a preprocessing step, you can apply intensity normalization. As a particular example, histogram equalization can be applied:
http://docs.opencv.org/doc/tutorials/imgproc/histograms/histogram_equalization/histogram_equalization.html
with an example code in java:
http://answers.opencv.org/question/7490/i-want-a-code-for-histogram-equalization/?answer=11014#post-id-11014
Note that such additional steps may slow down your overall detection. To increase overall speed, you can reduce your region of interest by detecting and excluding the sky.
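For reference, a minimal sketch of that preprocessing with the OpenCV Java bindings, equalizing only the luminance channel so the sign colors are not distorted (class and method names are just placeholders):

    import org.opencv.core.Core;
    import org.opencv.core.Mat;
    import org.opencv.imgproc.Imgproc;

    import java.util.ArrayList;
    import java.util.List;

    public class Preprocess {
        // Equalize the Y (luminance) channel of a BGR frame and convert back.
        public static Mat equalizeLuminance(Mat bgrFrame) {
            Mat ycrcb = new Mat();
            Imgproc.cvtColor(bgrFrame, ycrcb, Imgproc.COLOR_BGR2YCrCb);

            List<Mat> channels = new ArrayList<>();
            Core.split(ycrcb, channels);
            Imgproc.equalizeHist(channels.get(0), channels.get(0)); // Y channel only
            Core.merge(channels, ycrcb);

            Mat result = new Mat();
            Imgproc.cvtColor(ycrcb, result, Imgproc.COLOR_YCrCb2BGR);
            return result;
        }
    }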
You said that the camera gets focused on bright objects such as the sky.
On most modern phones you can set the area of the image that is included in the auto-focus calculation. Since the sky is always in the upper part of the image (after you take care of phone orientation), you can set the focus zone to the lower half of the image. This will take care of the problem in the first place.
If, however, you meant that the camera is not focusing on the bright objects but rather doing white balancing against them, you can solve this in the same way as described for focus. If that does not help, try histogram equalization and gamma correction techniques. These will help improve the contrast.
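If you are using the legacy android.hardware.Camera API, a sketch of restricting the focus and exposure metering to the lower half might look like the following. Area coordinates run from -1000 to 1000 and are relative to the sensor orientation, so adjust for device rotation as noted above; the method name is a placeholder.

    // Sketch (legacy android.hardware.Camera API): restrict focus and exposure
    // metering to the lower half of the frame.
    static void meterLowerHalf(android.hardware.Camera camera) {
        android.hardware.Camera.Parameters params = camera.getParameters();
        android.graphics.Rect lowerHalf = new android.graphics.Rect(-1000, 0, 1000, 1000);
        java.util.List<android.hardware.Camera.Area> areas =
                java.util.Collections.singletonList(
                        new android.hardware.Camera.Area(lowerHalf, 1000));
        if (params.getMaxNumFocusAreas() > 0) {
            params.setFocusAreas(areas);
        }
        if (params.getMaxNumMeteringAreas() > 0) {
            params.setMeteringAreas(areas);   // exposure metering, so the sky stops dominating
        }
        camera.setParameters(params);
    }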
I'm working on an Android project. Its goal is to detect a predefined movement gesture of the device: if the device rotates 45 degrees around an axis (X, Y or Z) and then rotates back to its first position, the gesture has happened and the app should detect it. (The first and second positions don't need to be exact; if it rotated 50 degrees instead of 45, that's not important!)
I tried to do that using the accelerometer and magnetic sensors of the device to continually monitor its orientation and detect the gesture, but the results weren't acceptable (explained here). Any starting point or idea, please?
It doesn't seem like anybody is going to provide a concrete and valuable answer. So let me try to do that.
First of all, even a fairly primitive and straightforward approach makes it clear that you do not need to process all the data coming from the sensors. Moreover, humans are not that fast, so there is no need to process 10000 values per second in order to identify any specific move either.
What you actually need is just to identify key points and make your decision. Does it sound like a tangent to you?
What I'm actually suggesting is to test your solution using an ordinary mouse and an available gesture recognition framework, because the actual idea is pretty much the same. So please check:
iGesture - Gesture Recognition Framework
Mouse Gestures
That way it might be easier to develop a proper solution.
Update
Let's imagine I'm holding my phone and I need to rotate it 90 degrees counterclockwise and then 180 degrees clockwise. I hope you will not expect me to draw some complex 3D shapes in the air (it would break usability and, frankly, I do not want to lose my phone), so it is fair to say there will be a point we can track, or we can easily simulate one.
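To illustrate the "track only the key points" idea on the original 45-degree gesture, here is a rough, untested sketch: it integrates the gyroscope around one axis and uses two thresholds as the key points (rotated out, then came back). The class name, axis choice, and thresholds are all placeholders; you would still register it with the SensorManager and reset the integrated angle while the device is at rest to limit drift.

    // Illustrative only: integrate gyroscope rotation about one axis and flag
    // the "rotated past ~45 degrees and came back" gesture with two thresholds.
    public class RotateAndReturnDetector implements android.hardware.SensorEventListener {
        private static final float TRIGGER_RAD = (float) Math.toRadians(45);
        private static final float RETURN_RAD  = (float) Math.toRadians(10);
        private float angle = 0f;          // integrated rotation about the chosen axis (rad)
        private boolean rotatedOut = false;
        private long lastTimestamp = 0;

        @Override
        public void onSensorChanged(android.hardware.SensorEvent event) {
            if (lastTimestamp != 0) {
                float dt = (event.timestamp - lastTimestamp) * 1e-9f;  // ns -> s
                angle += event.values[2] * dt;                         // z axis as an example
                if (!rotatedOut && Math.abs(angle) > TRIGGER_RAD) {
                    rotatedOut = true;                                 // key point 1: rotated out
                } else if (rotatedOut && Math.abs(angle) < RETURN_RAD) {
                    rotatedOut = false;
                    angle = 0f;
                    onGestureDetected();                               // key point 2: came back
                }
            }
            lastTimestamp = event.timestamp;
        }

        @Override
        public void onAccuracyChanged(android.hardware.Sensor sensor, int accuracy) { }

        protected void onGestureDetected() { /* notify the app */ }
    }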
Please see my other answer for a simple but working solution to a similar problem:
Dear programmers/scripters/engineers/other people,
The problem:
I'm currently developing an augmented reality application for an Android 3.2 tablet and having some issues with getting an accurate compass reading. I need to know exactly the (z) direction the tablet is facing, measured from the north. It doesn't matter if it's in degrees or radians.
What I currently have tried:
I used the magnetometer and accelerometer to calculate the angle. This has one big disadvantage: if you rotate 90 degrees, the sensors will measure a larger or a smaller angle, even when I'm in an open field far away from metal or any magnetic objects. Even applying the declination doesn't solve it.
Using the gyroscope would be an option. I have tried measuring the rotation speed and accumulating the measured values in a variable to keep track of the exact view direction. There seems to be a factor that causes distortion, though: I found out that fast rotations distort the direction measurement. The gyro's drift wasn't that troublesome; the application checks the other sensors for any movement, and if none is detected, the gyro's rotation change isn't taken into account.
The rotation vector works okay, but it has some issues like the gyroscope. If you move slowly and then stop suddenly, it will drift for a few seconds. Another problem is that it becomes inaccurate with quick rotations, depending on the speed and how many turns you've made. (You don't want to know how my co-workers look at me when I'm swinging the tablet in all directions...)
Sensor.Orientation, not much to say about it. It is deprecated for some reason, so I won't use it. A lot of examples on the internet use this sensor, and it's probably the same thing as the magnetometer/accelerometer combination.
So I'm currently out of ideas. Could you help me with brainstorming / finding a solution?
Thanks in advance, yours sincerely, Roland
EDIT 1:
I am willing to provide the code I have tried.
I'm summing up our comments:
It's clear from this video that the sensors on phones are not very accurate to begin with. Also interesting to read is this.
It is important that the user calibrates the sensors by doing a figure-eight motion while holding the phone flat. An app can programmatically check whether such a calibration is necessary and notify the user. See this question for details.
To eliminate jitter, the values obtained from the sensors need to be filtered by a low-pass filter of some kind. This has also been discussed on StackOverflow (a rough sketch follows this list).
The orientation obtained is not true north. To obtain true north one must use GeomagneticField.
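For the last two points, a minimal, untested sketch (the alpha value and the method names are placeholders; the magnetic azimuth is whatever you computed via SensorManager.getOrientation, and the Location comes from your location provider):

    // Simple exponential low-pass filter: smooths raw sensor values.
    private float[] lowPass(float[] input, float[] output, float alpha) {
        if (output == null) return input.clone();
        for (int i = 0; i < input.length; i++) {
            output[i] = output[i] + alpha * (input[i] - output[i]);
        }
        return output;
    }

    // Convert a magnetic-north azimuth (degrees) to true north using the declination.
    private float trueNorthAzimuth(float magneticAzimuthDeg, android.location.Location location) {
        android.hardware.GeomagneticField field = new android.hardware.GeomagneticField(
                (float) location.getLatitude(),
                (float) location.getLongitude(),
                (float) location.getAltitude(),
                System.currentTimeMillis());
        return magneticAzimuthDeg + field.getDeclination();   // declination is in degrees
    }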
I was wondering if there is a way to find out the size of a person's finger when they touch an Android device. I want to know this so I can change the sensitivity of certain objects in my game.
As a solution for devices that don't offer the touch.getSize() function, you can show on the start screen a high-resolution grid of 2D points. On touch, detect which points were covered, and from that you can get a pretty accurate area for how big the finger is.
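On devices that do report a size, a rough sketch inside a View subclass might look like this; getSize() gives a normalized, device-dependent value and getTouchMajor() an approximate major axis of the contact in pixels (0 if the hardware doesn't report it):

    @Override
    public boolean onTouchEvent(android.view.MotionEvent event) {
        if (event.getActionMasked() == android.view.MotionEvent.ACTION_DOWN) {
            float normalizedSize = event.getSize();      // roughly 0..1, device dependent
            float majorAxisPx = event.getTouchMajor();   // approximate contact size in pixels
            // use these to scale the hit area / sensitivity of your game objects
        }
        return super.onTouchEvent(event);
    }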