I'm working on an Android project. Its goal is to detect a predefined movement gesture of the device: if the device rotates roughly 45 degrees around an axis (X, Y or Z) and then rotates back to its original position, the gesture has happened and the app should detect it. The first and second positions don't need to be exact; if it rotates 50 degrees instead of 45, that's not important!
I tried to do this using the device's accelerometer and magnetic field sensors to continuously monitor the orientation and detect the gesture, but the results weren't acceptable (explained here). Any starting point or idea, please?
It doesn't seem like anybody is going to provide a concrete and valuable answer. So let me try to do that.
First of all, even a fairly primitive and straightforward approach makes it clear that you do not need to process all the data coming from the sensors. Moreover, humans are not that fast, so there is no need to process 10,000 values per second in order to identify any specific move.
What you actually need is just to identify a few key points and make your decision from them. Does that sound like mouse gesture recognition to you?
What I'm actually suggesting is to test your solution using an ordinary mouse and an available gesture recognition framework, because the underlying idea is pretty much the same. So please check:
iGesture - Gesture Recognition Framework
Mouse Gestures
That way it might be easier to develop a proper solution.
Update
Let's imagine I'm holding my phone and I need to rotate it 90 degrees counterclockwise and then 180 degrees clockwise. I hope you will not expect me to draw complex 3D shapes in the air (that would hurt usability, and frankly I do not want to lose my phone), so it is fair to say there is a point we can track, or one we can easily simulate.
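To make the key-point idea concrete for the original question (tilt roughly 45 degrees around an axis and come back), here is a rough sketch rather than a definitive implementation. The class name, thresholds, and sampling interval are placeholders, and it assumes the listener is registered for a rotation vector sensor:

    import android.hardware.Sensor;
    import android.hardware.SensorEvent;
    import android.hardware.SensorEventListener;
    import android.hardware.SensorManager;

    // Sketch: detect "tilted ~45 degrees and back" around one axis by sampling
    // the orientation at key points instead of reacting to every sensor event.
    public class TiltGestureDetector implements SensorEventListener {
        private static final float TRIGGER_RAD = (float) Math.toRadians(40); // ~45 deg, with tolerance
        private static final float RETURN_RAD  = (float) Math.toRadians(15); // "close enough" to the start

        private final float[] rotationMatrix = new float[9];
        private final float[] orientation = new float[3];    // azimuth, pitch, roll
        private Float startPitch = null;
        private boolean tiltedAway = false;
        private long lastSampleMs = 0;

        @Override
        public void onSensorChanged(SensorEvent event) {
            long now = System.currentTimeMillis();
            if (now - lastSampleMs < 100) return;             // ~10 samples/s is plenty for a human gesture
            lastSampleMs = now;

            SensorManager.getRotationMatrixFromVector(rotationMatrix, event.values);
            SensorManager.getOrientation(rotationMatrix, orientation);
            float pitch = orientation[1];                     // rotation around the X axis

            if (startPitch == null) { startPitch = pitch; return; }
            float delta = Math.abs(pitch - startPitch);

            if (!tiltedAway && delta > TRIGGER_RAD) {
                tiltedAway = true;                            // key point 1: device tilted away
            } else if (tiltedAway && delta < RETURN_RAD) {
                tiltedAway = false;                           // key point 2: device came back
                onGestureDetected();                          // hypothetical callback
            }
        }

        private void onGestureDetected() { /* notify whoever cares */ }

        @Override
        public void onAccuracyChanged(Sensor sensor, int accuracy) { }
    }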
Please see my other answer for a simple but working solution to a similar problem:
Related
I need your help.
I am trying to implement a paint system with live tracking. You may think: “Live tracking? What do you mean?”
So I am implementing a solution to track each touch movement on the display so I can decide whether to paint on a specific spot or not. This decision must be made in real time, right when the user touches the display.
There will be some images where the user should paint the inner white space inside of the respective shape; figure 1 shows an example of this shape.
The user should paint the inside of the shape in a specific direction/order; I still don’t know how I can configure this order/direction in the image. An example of a specific order would be the one shown in figure 2.
So I want the user to paint starting from zone 1 through zone 6, do you understand? I don’t want the user to be able to paint zone 1 and then zone 3 without painting zone 2 first…
In the end, I would have the shape filled, like figure 3 shows. The arrows are there just to show the correct drawing direction.
It’s not just about direction; the specific spots on the image matter. There will be different types of images, so I can’t rely on direction alone.
How can I determine the specific spots in an image with a specific order to be painted?
The only idea that came to my mind would be some kind of training mode, where I (the developer) would draw over the image in the order I want, and the system behind it would create virtual points in the code, with a small distance between each of them. Then I would have an array with all the “spots” saved in the order I drew them, do you understand?
I don’t know if this will work, or if it will be fast enough to traverse my array of “spots” while the user is trying to draw on the image, because I want this to work in real time: the system should check whether it’s a valid spot to draw right when the user touches the display…
Do you think it will work? Do you have some recommendations about this? What are the best structures and stuff like that? I am not asking for written code, I just need some guidelines.
I think I explained my problem well, if you have any question please ask me.
Thank you very much in advance!
If all your shapes are lines, you can treat the points you want your users to "touch" as squares: every time an ACTION_MOVE event happens, check whether the finger is touching the next point. You can make the size of the squares whatever suits you, like 48dp or 92dp, or the width/height of the shape you are drawing.
You said you don't want any code so I assume you know how to do this (if not reply and I will post code too).
Also, you didn't mention what happens if the user goes out of the shape. If you want the user to continue drawing until they lift their finger, then this approach works: when the ACTION_UP event happens, check whether the last point has been touched (assuming it can only be touched if all the others were touched, as explained above).
Here is an image if you'd like to have a visual guide:
Obviously, if you do it this way, you have to calculate the position of each point yourself; I doubt there is any way to automate this.
If you need any more help or you need some code please reply and I will post some! :D
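In case a sketch still helps, here is roughly what the square/hit-test idea above could look like. The class name is made up, and the zone rectangles and sizes are whatever you precompute for your image:

    import android.graphics.RectF;
    import android.view.MotionEvent;
    import java.util.List;

    // Sketch: each target zone is a square around a point; a zone only becomes
    // paintable once the finger has reached it in the predefined order.
    public class OrderedPaintTracker {
        private final List<RectF> zonesInOrder;  // precomputed squares, in drawing order
        private int nextZone = 0;

        public OrderedPaintTracker(List<RectF> zonesInOrder) {
            this.zonesInOrder = zonesInOrder;
        }

        /** Returns true if painting at the event's coordinates is currently allowed. */
        public boolean onTouch(MotionEvent event) {
            int action = event.getActionMasked();
            if (action != MotionEvent.ACTION_DOWN && action != MotionEvent.ACTION_MOVE) {
                return false;
            }
            float x = event.getX();
            float y = event.getY();
            // Unlock the next zone when the finger reaches it.
            if (nextZone < zonesInOrder.size() && zonesInOrder.get(nextZone).contains(x, y)) {
                nextZone++;
            }
            // Painting is only valid inside zones that are already unlocked.
            for (int i = 0; i < nextZone; i++) {
                if (zonesInOrder.get(i).contains(x, y)) return true;
            }
            return false;
        }

        public boolean isComplete() {
            return nextZone == zonesInOrder.size();  // check this on ACTION_UP
        }
    }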
Dear programmers/scripters/engineers/other people,
The problem:
I'm currently developing an augmented reality application for an Android 3.2 tablet and having some issues with getting an accurate compass reading. I need to know exactly the (z) direction the tablet is facing, measured from the north. It doesn't matter if it's in degrees or radians.
What I currently have tried:
I used the magnetometer and accelerometer to calculate the angle. This has one big disadvantage: if I rotate the tablet 90 degrees, the sensors report a larger or a smaller angle, even when I'm in an open field far away from metal or any magnetic objects. Applying the declination doesn't solve it either.
Using the gyroscope would be an option. I have tried measuring the rotation speed and accumulating it into a variable to track the exact view direction. There seems to be a factor that causes distortion, though: I found out that fast rotations throw off the measured direction. The gyro's drift wasn't that troublesome; the application checks the other sensors for any movement, and if none is detected, the gyro's rotation changes aren't taken into account.
The rotation vector works okay, but it has some of the same issues as the gyroscope. If you move slowly and then stop suddenly, it drifts for a few seconds. Another problem is that it becomes inaccurate with quick rotations, depending on the speed and how many turns you've made. (You don't want to know how my co-workers look at me when I'm swinging the tablet in all directions...)
The orientation sensor (Sensor.TYPE_ORIENTATION): not much to say about it. It is deprecated, so I won't use it. A lot of examples on the internet use it, and it's probably the same magnetometer/accelerometer combination under the hood.
So I'm currently out of ideas. Could you help me brainstorm a solution?
Thanks in advance, yours sincerely, Roland
EDIT 1:
I am willing to provide the code I have tried.
I'm summing up our comments:
It's clear from this video that the sensors on phones are not very accurate to begin with. Also interesting to read is this.
It is important that the user calibrates the sensors by doing a figure-8 motion while holding the phone flat. An app can programmatically check whether such a calibration is necessary and notify the user. See this question for details.
To eliminate jitter, the values obtained from the sensors need to be run through a low-pass filter of some kind. This has also been discussed on Stack Overflow.
The orientation obtained is relative to magnetic north, not true north. To obtain true north, one must apply the declination from GeomagneticField (see the sketch after this list).
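Putting the last two points together, a minimal sketch could look like the following; the smoothing factor and the location passed to GeomagneticField are placeholders, and the two on* methods are assumed to be fed from your SensorEventListener:

    import android.hardware.GeomagneticField;
    import android.hardware.SensorManager;

    // Sketch: low-pass filter the raw readings, derive the azimuth, then correct
    // it from magnetic north to true north with GeomagneticField.
    public class CompassHelper {
        private static final float ALPHA = 0.15f;            // low-pass smoothing factor (tune it)
        private final float[] gravity = new float[3];
        private final float[] geomagnetic = new float[3];
        private final float[] rotation = new float[9];
        private final float[] orientation = new float[3];

        public void onAccelerometer(float[] values) { lowPass(values, gravity); }      // TYPE_ACCELEROMETER
        public void onMagnetometer(float[] values)  { lowPass(values, geomagnetic); }  // TYPE_MAGNETIC_FIELD

        private static void lowPass(float[] in, float[] out) {
            for (int i = 0; i < 3; i++) out[i] = out[i] + ALPHA * (in[i] - out[i]);
        }

        /** Azimuth in degrees relative to true north, or NaN if no usable reading yet. */
        public float trueAzimuth(double latitude, double longitude, double altitude) {
            if (!SensorManager.getRotationMatrix(rotation, null, gravity, geomagnetic)) {
                return Float.NaN;
            }
            SensorManager.getOrientation(rotation, orientation);
            float magneticAzimuth = (float) Math.toDegrees(orientation[0]);

            GeomagneticField field = new GeomagneticField(
                    (float) latitude, (float) longitude, (float) altitude,
                    System.currentTimeMillis());
            return (magneticAzimuth + field.getDeclination() + 360f) % 360f;
        }
    }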
I have an object that continuously follows the user's touch coordinates. I would like to give the object an easing effect.
By which I mean: the object has a start point, and when the user touches the screen, it moves to the user's touch coordinates. It already does this, but it jumps straight to the coordinates; I want a controlled transition from point A to point B.
This easing or tween effect would need to happen on every frame if the user dragged or swiped their touch coordinates.
I have been reading about interpolation and animation effects in the Android SDK, but I don't really understand how to apply them to an object rather than a View, or how to do so continuously.
Any direction would be great. Thank you!
I built a complete tweening engine for Java. It doesn't allocate anything dynamically so it's totally safe for Android (I made it primarily to develop games on Android).
http://code.google.com/p/java-universal-tween-engine/
Your tween would look like:
Tween.to(yourObject, Type.POSITION, 1000)
.target(touchX, touchY)
.ease(Quad.OUT)
.start(aManager);
I used a syntax similar to the TweenMax engine, so you shouldn't be lost for too long :)
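For completeness, the snippet above assumes two other pieces that are easy to miss: a TweenAccessor registered for your object's class (the Type.POSITION constant comes from it), and a TweenManager that you update every frame. A rough sketch, with placeholder names, could look like this; check the engine's documentation for the exact signatures of the version you use:

    import aurelienribon.tweenengine.TweenAccessor;

    // Placeholder object with a position; in your app this is whatever you are moving.
    class YourObject { float x, y; }

    // Tells the engine how to read and write the object's tweened values.
    public class YourObjectAccessor implements TweenAccessor<YourObject> {
        public static final int POSITION = 1;   // the tween type used as Type.POSITION above

        @Override
        public int getValues(YourObject target, int tweenType, float[] returnValues) {
            returnValues[0] = target.x;
            returnValues[1] = target.y;
            return 2;                            // number of values being tweened
        }

        @Override
        public void setValues(YourObject target, int tweenType, float[] newValues) {
            target.x = newValues[0];
            target.y = newValues[1];
        }
    }

    // Once at startup:  Tween.registerAccessor(YourObject.class, new YourObjectAccessor());
    // Every frame:      aManager.update(deltaTime);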
I have an applet that I'm running online and I want to make sure people can't use the Java Robot class to operate the applet. I know that Yahoo does this on several of their game platforms, and I was wondering if anyone knew how they accomplished it.
Watch mouse movement, and make sure you're not seeing "jumps" from one place to another, but movement over time instead. Sun/Oracle's J2SE tutorials show how to follow mouse movement events: http://download.oracle.com/javase/tutorial/uiswing/events/mousemotionlistener.html
Keep in mind that this would potentially fail to detect the difference between a robot and a person on something like a touch screen, or tablet input device.
One more thing to watch for is whether the user is clicking the same pixel, or just in the same vicinity. Humans are fairly imprecise, robots generally aren't unless programmed to be.
I would also put in a gesture logger for good measure that compiles this information, and keeps track of the actual movements of your users. If you suspect someone of cheating, you can then look at what their actual mouse movements looked like, and compare that with a known person. That will give you a better idea of what you need to look for than any of us can come up with off the tops of our heads.
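A minimal sketch of the "no jumps" check, assuming a Swing component and a threshold you would tune against your own recorded human sessions:

    import java.awt.Point;
    import java.awt.event.MouseEvent;
    import java.awt.event.MouseMotionAdapter;
    import javax.swing.JComponent;

    // Sketch: human movement arrives as many small steps; a single large step
    // between consecutive events is suspicious.
    public class JumpDetector extends MouseMotionAdapter {
        private static final double MAX_STEP_PX = 150;  // placeholder threshold
        private Point last = null;
        private int suspiciousJumps = 0;

        @Override
        public void mouseMoved(MouseEvent e) {
            Point p = e.getPoint();
            if (last != null && last.distance(p) > MAX_STEP_PX) {
                suspiciousJumps++;
            }
            last = p;
        }

        public int getSuspiciousJumps() { return suspiciousJumps; }

        public void attachTo(JComponent component) {
            component.addMouseMotionListener(this);
        }
    }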
Keep track of the distribution of mouse positions over time. Humans move the mouse differently than a robot that knows exactly where to position it every single time it is clicked. Of course, a smarter robot can counter this defense.
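As a rough illustration of that, you could record each click's offset from the centre of the thing that was clicked and look at the spread; a near-zero variance over many clicks is a hint that something is positioning the cursor programmatically. The class and method names here are made up:

    import java.awt.Point;
    import java.util.ArrayList;
    import java.util.List;

    public class ClickSpread {
        private final List<Point> offsets = new ArrayList<Point>();

        public void recordClick(Point click, Point targetCentre) {
            offsets.add(new Point(click.x - targetCentre.x, click.y - targetCentre.y));
        }

        /** Sample variance of the clicks' distance from the target centre, in px^2. */
        public double variance() {
            if (offsets.size() < 2) return Double.NaN;
            double mean = 0;
            for (Point p : offsets) mean += Math.hypot(p.x, p.y);
            mean /= offsets.size();
            double var = 0;
            for (Point p : offsets) {
                double d = Math.hypot(p.x, p.y) - mean;
                var += d * d;
            }
            return var / (offsets.size() - 1);
        }
    }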
I've been attempting to develop a means of synthesizing human-like mouse movement in an application of mine for the past few weeks. At the start I used simple techniques like polynomial and spline interpolation; however, even with a little noise the result still failed to appear sufficiently human-like.
In an effort to remedy this issue, I've been researching ways of applying machine learning algorithms to real human mouse movement biometrics in order to synthesize mouse movements by learning from recorded real human ones. Users would compile a profile of recorded movements that would train the program for synthesis purposes.
I've been searching for a few weeks and have read several articles on the application of inverse biometrics to generating mouse dynamics, such as Inverse Biometrics for Mouse Dynamics; they tend to focus, however, on generating realistic timing from randomly generated dynamics, while I was hoping to generate a path from a specific point A to a specific point B. Plus, I still actually need to come up with a path, not just a few dynamics measured from one.
Does anyone have a few pointers to help a noob?
Currently, testing is done by recording movements and having me and several other developers watch the playback. Ideally the movement will be able to trick both an automatic biometric classifier and a real, live, breathing Homo sapiens.
Fitts's law gives a very good estimate of the time needed to position the mouse pointer. In the derivation section there is a simple explanation; I think you could use this as one of the basic building blocks of your app. Start with a big movement, add some inaccuracy to both the direction and the length of the movement, then do a smaller correction movement, and so on...
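If it helps, the Shannon formulation of Fitts's law is easy to drop in as a building block; the a/b constants below are purely illustrative and should be fitted to your own recorded data:

    // Sketch: estimated movement time T = a + b * log2(D / W + 1)
    public final class FittsLaw {
        private static final double A_MS = 100;   // reaction / start-stop overhead, in ms (placeholder)
        private static final double B_MS = 150;   // ms per bit of index of difficulty (placeholder)

        /** Estimated time in ms to cover distancePx onto a target of targetWidthPx. */
        public static double movementTimeMs(double distancePx, double targetWidthPx) {
            double indexOfDifficulty = Math.log(distancePx / targetWidthPx + 1) / Math.log(2);
            return A_MS + B_MS * indexOfDifficulty;
        }

        private FittsLaw() { }
    }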
First, I guess you record human mouse movements from A to B, because otherwise trying to synthesize a model for such movement does not seem possible to me.
Second, how about measuring the deviations from the "direct" path, maybe in relation to time? I actually suspect that movements look different for different angles, path lengths, etc., but maybe you can try a normalized model first that you just stretch (in space and time) and rotate as needed.
Third, the learning. The easiest thing would be to just have a collection of real moves (in the form I discussed above) and sample from that collection. Evaluate how that looks. If you really want a probabilistic model, then you have to evaluate what kind of model fits: is it enough to blur the direct path with Gaussian noise whose parameters you learn from your training set? Or some sine-wavy deviation? Or separate models for "getting near the button" and "final corrections"? Fitts's law might be useful for evaluation.
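To sketch the second and third points (normalize, then stretch and rotate a sampled real move onto a concrete A -> B), something like the following could work; it assumes the recorded paths have already been normalized to run from (0,0) to (1,0):

    import java.awt.geom.Point2D;
    import java.util.List;
    import java.util.Random;

    public class PathSampler {
        private final List<List<Point2D.Double>> normalizedPaths; // recorded human paths, normalized
        private final Random random = new Random();

        public PathSampler(List<List<Point2D.Double>> normalizedPaths) {
            this.normalizedPaths = normalizedPaths;
        }

        /** Pick one recorded path at random and map it onto the segment from a to b. */
        public Point2D.Double[] sample(Point2D.Double a, Point2D.Double b) {
            List<Point2D.Double> path = normalizedPaths.get(random.nextInt(normalizedPaths.size()));
            double dx = b.x - a.x, dy = b.y - a.y;
            double length = Math.hypot(dx, dy);
            double angle = Math.atan2(dy, dx);
            double cos = Math.cos(angle), sin = Math.sin(angle);

            Point2D.Double[] out = new Point2D.Double[path.size()];
            for (int i = 0; i < path.size(); i++) {
                Point2D.Double p = path.get(i);
                double x = p.x * length, y = p.y * length;    // stretch to the new distance
                out[i] = new Point2D.Double(                  // rotate, then translate onto A
                        a.x + x * cos - y * sin,
                        a.y + x * sin + y * cos);
            }
            return out;
        }
    }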
This question reminded me of a website I knew about years ago, so I visited it and found this in-depth discussion on the topic.
The timing is so similar as to make me think this question is related in some way. In fact, someone in the thread linked to the same article you did. If it's not related, well, there's a link to a lot of people discussing exactly what you're thinking about.
I don't think the problem is all that well defined. There is an important notion not mentioned so far, which is context. The mouse movement on my screen when Chrome has focus is massively different from the motion when Vim has focus.
The way a mouse moves varies based on the type of the device, the type of action, the UI elements involved, familiarity with the UI, the speed at which the user is attempting to complete their task, the skill of the user, initial failure of the user (eg miss-clicks), the user's emotional state (as well as many other factors). Do you plan on creating several pathing strategies to correspond to different contexts? Also how well do you know the algorithm you are trying to fool? I assume not extensively or you would simply program directly against that algorithm.
If a human is looking at the pathing, they might be able to identify the state associated with a pathing strategy and may be more inclined to be fooled if they identify it as a human state (eg user is rushing, miss-clicks, quickly closes a resulting popup, tries again slower). UI comes into play with not just size and position. I often quickly point to a toolbar, then slide across the options until I get to my target. Another example is that I typically pause on menu items while I am scanning for my target or hover over text I am reading. Are you attempting to emulate human behavior or just their mouse movements (because I think they are joined at the hip)?
Are you wanting to simulate human-like mouse movement because you are doing real-time online training for your game? If your training sequences are static, just record your mouse movements and play a mouse clicking sound effect whenever you click the mouse button. No mouse movement is going to feel "real enough" to you more than your own.
Personally, I feel experts in software move their mice too quickly in training videos. I prefer an approach taken by screencast video software I've seen that always moves the mouse linearly from point A --> B. The trick was that every mouse move made in the video took the same amount of time regardless of distance, say 3/4 of a second, followed by a mouse-click sound effect.
I believe they moved the mouse in this way because then the viewer could anticipate the landing area of the mouse by the direction and velocity the mouse moved at the start. In a training situation, I suppose that regular movements like this are gentler on the eye and perhaps easier to retain/recall.
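For what it's worth, that fixed-duration straight-line style is also trivial to reproduce with java.awt.Robot if you only need it for recorded demos; the duration and step count here are placeholders:

    import java.awt.AWTException;
    import java.awt.MouseInfo;
    import java.awt.Point;
    import java.awt.Robot;

    // Sketch: move the pointer linearly to the target over a constant duration.
    public class LinearMover {
        private static final int DURATION_MS = 750;   // ~3/4 of a second per move
        private static final int STEPS = 60;

        public static void moveTo(int targetX, int targetY) throws AWTException, InterruptedException {
            Robot robot = new Robot();
            Point start = MouseInfo.getPointerInfo().getLocation();
            for (int i = 1; i <= STEPS; i++) {
                double t = (double) i / STEPS;
                int x = (int) Math.round(start.x + (targetX - start.x) * t);
                int y = (int) Math.round(start.y + (targetY - start.y) * t);
                robot.mouseMove(x, y);
                Thread.sleep(DURATION_MS / STEPS);
            }
        }
    }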
Have you considered adding mouse tracking to your application so you essentially record how the user moves the mouse and then analyze the recordings?
I have not looked into this recently, but I believe a MouseListener (or, for tracking movement, a MouseMotionListener) in a Swing application gets you the information you need.
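A minimal recording sketch along those lines, assuming a Swing component you can attach to (names are placeholders):

    import java.awt.Component;
    import java.awt.event.MouseEvent;
    import java.awt.event.MouseMotionAdapter;
    import java.util.ArrayList;
    import java.util.List;

    // Sketch: log timestamped pointer positions so the recordings can be analyzed later.
    public class MouseRecorder extends MouseMotionAdapter {
        public static final class Sample {
            public final long timeMs;
            public final int x, y;
            Sample(long timeMs, int x, int y) { this.timeMs = timeMs; this.x = x; this.y = y; }
        }

        private final List<Sample> samples = new ArrayList<Sample>();

        @Override
        public void mouseMoved(MouseEvent e) {
            samples.add(new Sample(e.getWhen(), e.getX(), e.getY()));
        }

        public void attachTo(Component component) {
            component.addMouseMotionListener(this);
        }

        public List<Sample> getSamples() { return samples; }
    }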