Know when phone enters building on android device - java

I was wondering if there is a way for me to detect whether the user's device is being "obstructed" by a building or roof of some sort. I'm developing a very precise location-based app, and it's key that my users get alerted if something is wrong with their GPS or if some physical object is getting in the way.
EDIT: The app I've created strictly takes snapshots; it's not something that's constantly running. Just a quick snapshot.

Not directly. You can try calling LocationManager.getGpsStatus and iterating over the list of satellites every so often, looking for a jump in signal-to-noise ratio since the last reading. Getting a working algorithm is going to take a good amount of work and testing on a variety of devices with different GPS chips.
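A rough sketch of that idea using the (old) GpsStatus API — the class name and the 50% drop threshold are my own and would need tuning per device (requires the ACCESS_FINE_LOCATION permission):

import android.location.GpsSatellite;
import android.location.GpsStatus;
import android.location.LocationManager;

public class SnrMonitor {
    private final LocationManager locationManager;
    private float lastAverageSnr = -1f;

    public SnrMonitor(LocationManager locationManager) {
        this.locationManager = locationManager;
    }

    // Returns true if the average SNR dropped sharply since the last call.
    // Typically called from a GpsStatus.Listener or on a timer.
    public boolean snrDroppedSharply() {
        GpsStatus status = locationManager.getGpsStatus(null);
        float sum = 0f;
        int count = 0;
        for (GpsSatellite satellite : status.getSatellites()) {
            sum += satellite.getSnr();
            count++;
        }
        if (count == 0) {
            return true; // no satellites visible at all
        }
        float average = sum / count;
        boolean dropped = lastAverageSnr > 0f && average < lastAverageSnr * 0.5f; // made-up threshold
        lastAverageSnr = average;
        return dropped;
    }
}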


Firebase Jobdispatcher - to use or not to use

I am developing an app: a simple, but hopefully addictive, little game. The user has to solve predefined levels as quickly as possible.
Information on the levels is stored online in a MySQL database, which also contains the average time it took all players to complete a given level. The level data is also stored locally, in an SQLite database on the phone.
What I want to do is the following: synchronize the average time (from server to phone) and upload the time it took the player to complete a level (from phone to server).
Ideally this happens each time the player starts the app or finishes a level. For this, I am considering Firebase JobDispatcher, but I was wondering whether this is overkill. For your information: it is not the end of the world if the average time stored on the phone is not entirely up to date; the game will work just fine without it. On the other hand, I want it to be updated regularly, as the performance of the user will be compared to the average time.
I am a beginner who wants to do things correctly. I hope you can help.
It sounds like you already know when some work should happen. As you said:
Ideally this happens each time the player starts the app or finishes a level.
You don't need JobDispatcher to schedule work when you are already in control of the times when the work should happen. JobDispatcher is used when you need to schedule some work at some point in time or interval when your app may not even be running.
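For example, a minimal sketch of doing the sync directly at those moments on a background thread; the helper methods (uploadPlayerTime, fetchAverageTime, saveAverageTimeLocally) are placeholders for your own HTTP and SQLite code:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class LevelSync {
    private final ExecutorService executor = Executors.newSingleThreadExecutor();

    // Call when a level is finished; the same method works at app start.
    public void syncLevel(final int levelId, final long playerTimeMs) {
        executor.execute(new Runnable() {
            @Override
            public void run() {
                try {
                    uploadPlayerTime(levelId, playerTimeMs);    // placeholder: POST to your server
                    long averageMs = fetchAverageTime(levelId); // placeholder: GET from your server
                    saveAverageTimeLocally(levelId, averageMs); // placeholder: update SQLite
                } catch (Exception e) {
                    // A stale average is acceptable, so just try again on the next sync.
                }
            }
        });
    }

    private void uploadPlayerTime(int levelId, long timeMs) { /* your HTTP call */ }
    private long fetchAverageTime(int levelId) { return 0L; /* your HTTP call */ }
    private void saveAverageTimeLocally(int levelId, long averageMs) { /* your SQLite write */ }
}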

Does the getFocusDistances() camera API function actually work for Nexus 5? or any other device?

I would like to determine the distance of an object from my Nexus 5 camera, preferably without using an object like a coin for scale. I figured the Camera.Parameters getFocusDistances function would work for this.
I attempted to do this via something like the following in my takePicture() jpeg callback:
Parameters params = camera.getParameters();
float[] focusDistances = new float[3];
params.getFocusDistances( focusDistances );
I tried running this a few times with objects of different distances from the camera, though each time, focusDistances[FOCUS_DISTANCE_NEAR_INDEX], focusDistances[FOCUS_DISTANCE_OPTIMAL_INDEX], and focusDistances[FOCUS_DISTANCE_FAR_INDEX] all contained the value positive infinity.
It's possible I'm doing something wrong, in which case please let me know if there is a specific way in which this will work on the Nexus 5. However, the Android API specifically states that you can call getParameters() (and then getFocusDistances()) at any time to get the latest focus distances, so I think this should work. One thing I haven't tried yet is doing the above in an onAutoFocus handler, but I don't see why that should matter.
I did some research to see what was going on, and I found several questions describing this sort of behavior from getFocusDistances(); typically the answer, if there was one, was that the function is not supported by the Android API and/or the hardware manufacturer. A lot of these discussions were from several years ago, and despite the doubts they give me about getFocusDistances, I've still seen the function suggested for getting the focus distance, so I figure it must work on SOME device for SOME Android API version.
Does anybody know if getFocusDistances() works for any particular version of Android on the Nexus 5? If not, does anybody know of ANY device it does work on?
EDIT:
Since posting, I have tried obtaining the focus distances in the onAutoFocus handler, as well as trying more extensively with objects at various distances. The results have been consistent: positive infinity is always returned for all 3 focus distances (NEAR, OPTIMAL, and FAR). I even tried this on a Nexus 7, and getFocusDistances always returns constant values (0.95, 1.9, and infinity), so apparently getFocusDistances isn't implemented on that device either.
Therefore, I really have two questions:
Is there any way to get somewhat accurate focus distances using the Android Camera API with the Nexus 5? I'm even wondering if there is a custom Android version where getFocusDistances is actually implemented, since otherwise I may attempt to implement it myself, depending on what I find when examining the API code.
Are there any Android-capable devices that are known to implement getFocusDistances in a somewhat accurate manner?
First of all, it's very difficult to measure object distance from a single shot/view. You will find many research papers that try to employ vision-based techniques to judge object distance; I can refer you to one such paper. Its authors tried to implement a positioning system that works solely with a mobile camera and sensors. You will probably realize how non-trivial it is to measure object distance from a single camera view. They ultimately used a vision technique called "structure from motion" to calculate the distance (from multiple photos taken from multiple angles).
Even traditional apps like SmartDistance and SmartMeasure need to use geometric tricks to measure distance; none of them can rely on camera parameters alone. Sorry for the long introduction; I have done a project of this sort before, and I am telling you all this based on my experience.
To answer your question: I haven't yet found any Android device that returns realistic focus distance values. They come back either as constant values or sometimes as 0 and infinity. I found one report that it worked on the Galaxy Nexus, but only for object distances within 30 cm; it doesn't work beyond that. The bottom line is that you cannot rely on this function of the Camera API, since it is heavily dependent on the device drivers, and phone cameras are not well known for their lens/sensor quality. It would be very difficult to work out any optics-based formula for mobile-phone cameras, so I would suggest you go for sensor-based geometric tricks instead.

Is it possible to modify the time OSCeleton sends a lost_user event/message?

I'm playing around with OSCeleton and Processing and have successfully got it to track skeletons and do stuff.
What I'm wondering is whether there's any way to change the delay before a "lost_user" message is sent to Processing.
It is taking too long for what I'm trying to achieve, since I need to stop tracking a user as soon as he walks away from the screen, so that I can accept another user's interaction (imagine an installation that a lot of people want to play with).
Any help/tips would be really appreciated.
Jon
As far as I can tell from OSCeleton's source, and with my minimal experience with the Kinect (I never used OSCeleton), there is no way to modify that code to do what you want. It seems to be handled lower down, by the driver or by the Kinect itself(?).
Yet you need not be bound by that, and I would suggest a couple of ways to work around the problem, if I understand it properly.
First, the latest drivers and examples should have multi-user support, meaning you can just arrange who your main user is. From what I can tell from the source, you do get an OSC message in Processing when a new user is detected, along with an ID number. You can put each new user that arrives into an ArrayList and figure out a way to do things without depending on the latest user.
If you are still going for the user-after-user approach, or I was mistaken about the multi-user support (which is mentioned nowhere in the README), you can check for yourself whether a user has left the area. Although you cannot get a definitive answer that way, you can check, for example, whether a specific joint (or all joints) of a user has moved over the last 10-20 OSC messages received. That probably means storing the joint's position in a 10-20 item array, continuously updating it, and checking whether the items differ. If all items in the array are the same, your user hasn't moved a bit and thus probably should not be taken into account.
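A rough sketch of that check in plain Java (the class name, buffer size, and tolerance are my own choices and would need tuning):

public class JointActivityTracker {
    // Ring buffer of the last 15 reported positions for one joint (x, y, z).
    private final float[][] recent = new float[15][3];
    private int filled = 0;
    private int nextIndex = 0;

    // Call this for every joint position received via OSC for the user.
    public void addPosition(float x, float y, float z) {
        recent[nextIndex][0] = x;
        recent[nextIndex][1] = y;
        recent[nextIndex][2] = z;
        nextIndex = (nextIndex + 1) % recent.length;
        if (filled < recent.length) filled++;
    }

    // True if the joint has barely moved over the stored samples.
    public boolean looksIdle() {
        if (filled < recent.length) return false; // not enough data yet
        float tolerance = 0.01f; // arbitrary "no movement" threshold
        for (int i = 1; i < filled; i++) {
            for (int axis = 0; axis < 3; axis++) {
                if (Math.abs(recent[i][axis] - recent[0][axis]) > tolerance) {
                    return false;
                }
            }
        }
        return true;
    }
}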
Last but not least, you can switch to other solutions. The one I used about a year ago was "Synapse for Kinect", which also seems stale now. The latest option is a Processing library called SimpleOpenNI, which definitely has multi-user tracking, and with it you won't need any intermediary programs running to give you the joints.
I hope this helps.

Android Camera autofocus when user holds camera still

I'm sure most of you have used an Android phone before and taken a picture. Whenever the user changes the phone's position and then holds it steady, the camera focuses automatically. I'm having a hard time replicating this in my app. The autoFocus() method is called only once, when the application is launched. I have been searching for a solution for the past 3 days, and while reading the Google documentation I stumbled upon the sensor method calls (such as when the user tilts the phone forwards or backwards). I could use that API to achieve what I need, but it sounds too dirty and too complicated. I'm sure there's another way around it.
All the examples I have found on the internet only focus when the user presses the screen or a button. I have also gone through several questions on SO hoping to find what I am looking for, but without success. I have seen this question, and that String is not compatible with my phone. For some reason the only focusing modes I can use are fixed and auto.
I was hoping someone here would shed some light on the subject because I am at a loss.
Thank you very much for your time.
Since API 14 you can set this parameter:
http://developer.android.com/reference/android/hardware/Camera.Parameters.html#FOCUS_MODE_CONTINUOUS_PICTURE
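A minimal sketch of setting it (always check the supported focus modes first, since not every device offers continuous focus):

import android.hardware.Camera;

public final class FocusHelper {
    // Enables continuous picture autofocus if the device supports it (API 14+).
    public static void enableContinuousFocus(Camera camera) {
        Camera.Parameters params = camera.getParameters();
        if (params.getSupportedFocusModes()
                  .contains(Camera.Parameters.FOCUS_MODE_CONTINUOUS_PICTURE)) {
            params.setFocusMode(Camera.Parameters.FOCUS_MODE_CONTINUOUS_PICTURE);
            camera.setParameters(params);
        }
    }
}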
Yes, camera.autoFocus(callback) is a one-time function. You will need to call it in a loop to have it autofocus continuously. Preferably you would use motion detection via the accelerometer or compass to detect when the camera has been moved.
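A rough sketch of that loop, simply re-arming autofocus from the callback after a fixed delay (the delay is arbitrary, and you could gate the re-trigger with an accelerometer check instead):

import android.hardware.Camera;
import android.os.Handler;

public class RepeatingAutoFocus implements Camera.AutoFocusCallback {
    private static final long DELAY_MS = 1500; // arbitrary pause between focus runs
    private final Camera camera;
    private final Handler handler = new Handler(); // created on the UI thread

    public RepeatingAutoFocus(Camera camera) {
        this.camera = camera;
    }

    public void start() {
        camera.autoFocus(this);
    }

    @Override
    public void onAutoFocus(boolean success, Camera focusedCamera) {
        // Schedule the next focus run once this one finishes.
        handler.postDelayed(new Runnable() {
            @Override
            public void run() {
                camera.autoFocus(RepeatingAutoFocus.this);
            }
        }, DELAY_MS);
    }
}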

how to calibrate the orientation sensor in android?

I'm writing an app for Google Android 2.1 that needs to know which direction (N/W/S/E) the device (HTC Hero) is facing. The sensor and its listener are working great, but the values I get from the sensor are totally crappy, e.g. it tells me I'm facing north when the device is actually facing SW or so.
This seems to be a known problem with Android devices. The "solutions" I found on the web look like this:
shake the device around
move the device in a figure eight
tap on the device's back
This is supposed to trigger the sensor's recalibration. And the thing with the "moving around" works for me... but that's not very professional, I guess...
So - how do I trigger the recalibration of the orientation sensor from the SDK? I need the sensor to be properly calibrated without any fancy stuff that would make users of this app look like complete idiots while they "manually" recalibrate their phones...
Is there any way to do this "right"?
EDIT:
Or: is there any way to determine PROGRAMMATICALLY whether the device is correctly calibrated or not? As a fallback option, so to speak... then I could warn the user that the device needs "manual" recalibration.
I don't believe there is a way to know programmatically whether your compass sensor is calibrated correctly unless you use a secondary data source like GPS. If you can use GPS, then while the user is moving you can compare the GPS movement with the compass heading and correct for the difference. Remember that local magnetic fields can screw up the compass readings, and the device has no idea whether you are out in the middle of a forest or next to a transformer.
With these micro devices there is always a bit of skew you'll have to deal with. If you check the values from the accelerometer as well, you'll see that at rest they aren't always returning 9.8 m/s^2 (or at least not consistently between devices).
In your help text you may just need to tell the user to rotate/twist their phone in a figure eight to reset the compass.
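A rough sketch of that GPS cross-check: compare the GPS bearing with the compass azimuth while the user is moving and flag a persistent disagreement. The speed and angle thresholds below are assumptions, not tested values:

import android.location.Location;

public final class CompassSanityCheck {
    // Returns true if the compass disagrees badly with the direction of travel.
    // location: latest GPS fix; compassAzimuth: current heading in degrees (0..360).
    public static boolean looksMiscalibrated(Location location, float compassAzimuth) {
        if (!location.hasBearing() || !location.hasSpeed() || location.getSpeed() < 1.0f) {
            return false; // too slow or no bearing: the comparison is meaningless
        }
        float difference = Math.abs(location.getBearing() - compassAzimuth);
        if (difference > 180f) {
            difference = 360f - difference; // use the shortest angular distance
        }
        return difference > 45f; // arbitrary tolerance
    }
}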
I assume you are referring to the Magnetometer inside the Hero.
Calibrating it is a tough one and will/should always require user interaction for a reliable calibration. There are separate strategies to deal with that. You could ask users to hold their device facing north and then recalibrate. If the users don't know where north is, you can ask them to point the device towards the sun, and based on location and time you can calculate where that is.
Leaving calibration aside, I would guess that your problem is that the readings you get from the sensor are inaccurate. Of course calibration is a prerequisite for accurate readings, but there are also other factors in play.
It is common practice to complement the data from one sensor with data from a different sensor to increase accuracy. You could use the GPS to determine a heading when the user is moving; if he's moving slowly, however, this is inaccurate as well. You could integrate the data reported by the accelerometer to guess at orientation changes (not the absolute orientation), but honestly a gyroscope would be more suitable in this case.
Systems that work like this are sometimes called Inertial Navigation Systems (INS) because, given a fixed point in space, they can determine their subsequent relative position and orientation accurately without further external data. Using a Kalman filter is common practice to recalibrate the system from time to time when an absolute position (e.g. retrieved via GPS) is available.
Although it is unrealistic to implement a full-fledged INS, you can certainly draw a few ideas from how they work to make your orientation readings more accurate.
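As a very simplified illustration of that idea (nowhere near a real Kalman filter), you could slowly learn a correction offset for the compass from the GPS bearing while the user is moving. The blend factor and speed threshold here are arbitrary assumptions:

import android.location.Location;

public class HeadingCorrector {
    private static final float GPS_WEIGHT = 0.3f; // assumed blend factor per GPS fix
    private float correction = 0f;  // degrees added to the raw compass azimuth
    private float lastAzimuth = 0f; // latest raw compass reading, degrees

    // Call with every compass reading (degrees, 0..360).
    public void onCompassAzimuth(float azimuth) {
        lastAzimuth = azimuth;
    }

    // Call with every GPS fix; nudges the learned offset toward the GPS bearing.
    public void onGpsFix(Location fix) {
        if (fix.hasBearing() && fix.hasSpeed() && fix.getSpeed() > 2.0f) {
            float error = shortestAngle(fix.getBearing() - (lastAzimuth + correction));
            correction += GPS_WEIGHT * error;
        }
    }

    // Corrected heading in degrees, 0..360.
    public float currentHeading() {
        return normalize(lastAzimuth + correction);
    }

    private static float shortestAngle(float degrees) {
        while (degrees > 180f) degrees -= 360f;
        while (degrees < -180f) degrees += 360f;
        return degrees;
    }

    private static float normalize(float degrees) {
        degrees %= 360f;
        return degrees < 0f ? degrees + 360f : degrees;
    }
}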
