How to learn mouse movement? - java

I've been attempting to develop a means of synthesizing human-like mouse movement in an application of mine for the past few weeks. At the start I used simple techniques like polynomial and spline interpolation; however, even with a little noise added, the result still failed to appear sufficiently human-like.
In an effort to remedy this issue, I've been researching ways of applying machine learning algorithms to real human mouse movement biometrics in order to synthesize mouse movements by learning from recorded real human ones. Users would compile a profile of recorded movements that would train the program for synthesis purposes.
I've been searching for a few weeks and read several articles on applying inverse biometrics to generate mouse dynamics, such as Inverse Biometrics for Mouse Dynamics; they tend to focus, however, on generating realistic timing from randomly-generated dynamics, while I was hoping to generate a path from a specific point A to a specific point B. Plus, I still actually need to come up with a path, not just a few dynamics measured from one.
Does anyone have a few pointers to help a noob?
Currently, testing is done by recording movements and having me and several other developers watch the playback. Ideally the movement will be able to trick both an automatic biometric classifier and a real, live, breathing Homo sapiens.

Fitts's law gives a very good estimation of the time needed to position the mouse pointer. In the derivation section there is a simple explanation; I think you could use this as one of the basic building blocks of your app. Start with big movements, put some inaccuracy into both the direction and the length of the movement, then do a smaller correction movement, and so on...
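For example, a minimal sketch of that idea in Java; the a and b coefficients, the ~10% ballistic error, and the printf stand-in for actually emitting the sub-movement are all assumptions you would replace with values fitted to real recordings:

    import java.awt.Point;
    import java.util.Random;

    public class FittsMove {
        // Placeholder coefficients (seconds); you would fit these to recorded data.
        static final double A = 0.1, B = 0.15;
        static final Random RNG = new Random();

        // Fitts's law: time grows with the index of difficulty log2(D/W + 1).
        static double movementTimeSeconds(double distancePx, double targetWidthPx) {
            return A + B * (Math.log(distancePx / targetWidthPx + 1) / Math.log(2));
        }

        // One "ballistic move + correction" pass: land somewhere near the target
        // (assumed ~10% error), then recurse with smaller and smaller corrections.
        static void moveTowards(Point current, Point target, double targetWidthPx) {
            double dist = current.distance(target);
            if (dist <= targetWidthPx / 2) {
                return;                                   // inside the target: done
            }
            double t = movementTimeSeconds(dist, targetWidthPx);
            Point landing = new Point(
                    (int) Math.round(target.x + 0.1 * dist * RNG.nextGaussian()),
                    (int) Math.round(target.y + 0.1 * dist * RNG.nextGaussian()));
            System.out.printf("sub-move of %.0f px taking %.2f s, lands at %s%n", dist, t, landing);
            moveTowards(landing, target, targetWidthPx);
        }
    }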

First, I guess you record human mouse movements from A to B, because otherwise trying to synthesize a model for such movement does not seem possible to me.
Second, how about measuring the deviations from the "direct" path, maybe in relation to time? I actually suspect that movements look different for different angles, path lengths etc., but maybe you can try a normalized model first that you just stretch (in space and time) and rotate as you need it.
Third, the learning. The easiest thing would be to just have a collection of real moves (in the form I discussed above) and sample from that collection. Evaluate how that looks. If you really want a probabilistic model, then you have to evaluate what kinds of models fit. Is it enough to blur the direct path with Gaussian noise whose parameters you learn from your training set? Or some (sin-)wavy deviation? Or separate models for "getting near the button" and "final corrections"? Fitts's law might be useful for evaluation.
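As a concrete starting point, here is a minimal sketch of the "blur the direct path" variant; the sigmaPx noise scale and the sine envelope that pins the endpoints are assumptions, and ideally sigmaPx would be estimated from your training set:

    import java.awt.Point;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.Random;

    public class NoisyPath {
        // Generate a path from a to b (assumed distinct) by adding perpendicular
        // Gaussian offsets to the straight line between them.
        public static List<Point> generate(Point a, Point b, int steps, double sigmaPx, long seed) {
            Random rng = new Random(seed);
            List<Point> path = new ArrayList<>();
            double dx = b.x - a.x, dy = b.y - a.y;
            double len = Math.hypot(dx, dy);
            double px = -dy / len, py = dx / len;            // unit vector perpendicular to a->b
            for (int i = 0; i <= steps; i++) {
                double t = (double) i / steps;
                double wobble = sigmaPx * Math.sin(Math.PI * t) * rng.nextGaussian(); // zero at both ends
                path.add(new Point(
                        (int) Math.round(a.x + t * dx + wobble * px),
                        (int) Math.round(a.y + t * dy + wobble * py)));
            }
            return path;
        }
    }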

This question reminded me of a website I knew about years ago, so I visited it and found this in-depth discussion on the topic.
The timing is so similar as to make me think this question is related in some way. In fact, someone in the thread linked to the same article you did. If it's not related, well, there's a link to a lot of people discussing exactly what you're thinking about.

I don't think the problem is all that well defined. There is an important notion not mentioned so far, which is context. The mouse movement on my screen when Chrome has focus is massively different from the motion when Vim has focus.

The way a mouse moves varies based on the type of device, the type of action, the UI elements involved, familiarity with the UI, the speed at which the user is attempting to complete their task, the skill of the user, initial failures by the user (e.g. mis-clicks), the user's emotional state (as well as many other factors). Do you plan on creating several pathing strategies to correspond to different contexts? Also, how well do you know the algorithm you are trying to fool? I assume not extensively, or you would simply program directly against it.
If a human is looking at the pathing, they might be able to identify the state associated with a pathing strategy, and may be more inclined to be fooled if they identify it as a human state (e.g. the user is rushing, mis-clicks, quickly closes a resulting popup, tries again more slowly). The UI comes into play with more than just size and position. I often quickly point to a toolbar, then slide across the options until I get to my target. Another example is that I typically pause on menu items while I am scanning for my target, or hover over text I am reading. Are you attempting to emulate human behavior or just their mouse movements (because I think they are joined at the hip)?

Do you want to simulate human-like mouse movement because you are doing real-time online training for your game? If your training sequences are static, just record your own mouse movements and play a mouse-click sound effect whenever you click the mouse button. No synthesized mouse movement is going to feel more "real" to you than your own.
Personally, I feel software experts move their mice too quickly in training videos. I prefer an approach taken by screencast video software I've seen that always moves the mouse linearly from point A --> B. The trick was that every mouse move made in the video took the same amount of time regardless of distance, say 3/4 of a second, followed by a mouse-click sound effect.
I believe they moved the mouse in this way because then the viewer could anticipate the landing area of the mouse by the direction and velocity the mouse moved at the start. In a training situation, I suppose that regular movements like this are gentler on the eye and perhaps easier to retain/recall.
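If that style is what you want, a rough sketch using java.awt.Robot could look like the following; the 750 ms duration and 60 interpolation steps are assumptions, not values from the software I mentioned:

    import java.awt.AWTException;
    import java.awt.Robot;

    public class LinearMove {
        // Move the pointer from (x0, y0) to (x1, y1) in a straight line over a
        // fixed duration, regardless of distance.
        public static void move(int x0, int y0, int x1, int y1)
                throws AWTException, InterruptedException {
            Robot robot = new Robot();
            final int steps = 60;                 // assumed sampling rate
            final long durationMs = 750;          // same duration for every move
            for (int i = 0; i <= steps; i++) {
                double t = (double) i / steps;
                robot.mouseMove((int) Math.round(x0 + t * (x1 - x0)),
                                (int) Math.round(y0 + t * (y1 - y0)));
                Thread.sleep(durationMs / steps);
            }
        }
    }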

Have you considered adding mouse tracking to your application so you essentially record how the user moves the mouse and then analyze the recordings?
I have not looked into this recently, but I believe a MouseMotionListener in a Swing application will give you the information you need.
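Something along these lines should work; this is a minimal, hypothetical sketch (the MovementRecorder and Sample names are made up for illustration):

    import java.awt.event.MouseEvent;
    import java.awt.event.MouseMotionAdapter;
    import java.util.ArrayList;
    import java.util.List;
    import javax.swing.JComponent;

    public class MovementRecorder {
        // One recorded sample: position plus timestamp.
        public static class Sample {
            public final int x, y;
            public final long timeMs;
            public Sample(int x, int y, long timeMs) { this.x = x; this.y = y; this.timeMs = timeMs; }
        }

        private final List<Sample> samples = new ArrayList<>();

        // Attach to any Swing component whose mouse traffic you want to record.
        public void attachTo(JComponent component) {
            component.addMouseMotionListener(new MouseMotionAdapter() {
                @Override
                public void mouseMoved(MouseEvent e) {
                    samples.add(new Sample(e.getX(), e.getY(), System.currentTimeMillis()));
                }
            });
        }

        public List<Sample> getSamples() {
            return samples;
        }
    }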

Related

How to take two time-based inputs from a user without calling wait() or yield()

I'm currently in the process of making a game in Java. I've recently been working on mouse input, and where I'm stuck has to do with the game recognizing when the mouse has moved. I've obviously gotten it to accept mouse input, but I can't make the in-game player rotate correctly based upon mouse position.
What I can come up with to fix it is to take a measurement of the mouse's x position at one point in time, compare it to another measurement a millisecond later to see whether or not the mouse has moved, and if so, rotate the player at a speed proportional to the distance the mouse has travelled.
However, I have no idea how to do that without forcing the entire program to wait or yield (which would be bad).
I've tried writing the position to a file and then re-reading it immediately after, but it tends to read from the file at different rates, causing the rotation to be spastic and uneven. But besides that, I'm open to ideas. I can't really post my code because there are many class files that go into the program, but if necessary, I can post the whole project to github for anyone to look at.
GitHub Repository link: https://github.com/isl1/Duplicity
P.S. If it helps, I'm using LWJGL 3, paulscode, and Slick-Util on Eclipse Mars.
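For reference, one way to get per-frame mouse deltas in LWJGL 3 without waiting or yielding is a GLFW cursor-position callback; this is a minimal sketch assuming you already have the GLFW window handle, not code from the repository above:

    import org.lwjgl.glfw.GLFW;

    public class MouseDelta {
        private double lastX = Double.NaN;
        private double deltaX;

        // Register once, after the GLFW window has been created.
        public void install(long window) {
            GLFW.glfwSetCursorPosCallback(window, (win, xpos, ypos) -> {
                if (!Double.isNaN(lastX)) {
                    deltaX += xpos - lastX;   // accumulate horizontal movement
                }
                lastX = xpos;
            });
        }

        // Call once per frame: returns movement since the last frame, then resets.
        public double consumeDeltaX() {
            double d = deltaX;
            deltaX = 0;
            return d;
        }
    }

Each frame the game loop would then call something like player.rotate(sensitivity * mouse.consumeDeltaX()), so the rotation speed scales with how far the mouse moved since the previous frame and nothing ever blocks.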

Face Features Detection - corner of eyes, eyebrows

I am creating a basic emotion detection system for a mobile phone using OpenCV4Android. My system is already capable of finding the mouth and doing some preprocessing. I get nice results extracting face objects from Canny:
Example Face 1: https://dl.dropboxusercontent.com/u/108321090/FACE%20%282%29.png
Example Face 2: https://dl.dropboxusercontent.com/u/108321090/FACE%20%281%29.png
Red rectangles are areas found by cascades. I have those saved as Mat objects.
Blue dots are points I need to find. Problem is, that I have both eyebrows and eyes on the same segment.
Additionally, there are situations in which the eyebrows are directly connected to the eyes (in some emotional states), which makes it hard to access some points. I also have the normal images (of course) and thresholded ones, which are also interesting for eyebrow shapes - but in those I lose some other objects (the mouth, which doesn't matter since it's already done, and sometimes the eyes) due to bad lighting, while the eyebrows are always clearly visible. Of course I could change the thresholding a bit, since I don't need it for finding the other features. Like I said, the mouth is done well; the eyes and eyebrows are left.
Example Face 3: https://dl.dropboxusercontent.com/u/108321090/Screenshot_2014-01-17-01-33-14.png
Example Face 4: https://dl.dropboxusercontent.com/u/108321090/Screenshot_2014-01-17-01-26-33.png
Example Face 5 (a bit problematic - the eyes are gone, but if I threshold them locally rather than globally it's fine): https://dl.dropboxusercontent.com/u/108321090/Screenshot_2014-03-05-01-30-48.png
Example Face 6 (eyebrows connected to eyes): https://dl.dropboxusercontent.com/u/108321090/Screenshot_2014-03-05-01-28-21.png
I want to ask if you could provide me with any materials or ideas related to detecting eye and eyebrow action units.
If you can locate an eye/eyebrow unit, you can probably just track it and relate emotions to the relative motion there, rather than trying to separate the eyes from the eyebrows. Your first two example faces are gradients while the rest are thresholded grey tones. I would rather use gradients, since grey tones are affected by lighting and shadows.
I would also avoid the Canny edge detector, since it is a highly non-linear and unstable operator for matching sequential frames and hence for motion detection. I would rather use a simpler Sobel filter and some kind of motion detection, but only after tracking subtracts the global head motion.
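For example, a rough sketch of a Sobel gradient map for an eye/eyebrow region using the OpenCV Java bindings; the 16-bit intermediate depth and the 0.5/0.5 weighting are assumptions, not the only reasonable choices:

    import org.opencv.core.Core;
    import org.opencv.core.CvType;
    import org.opencv.core.Mat;
    import org.opencv.imgproc.Imgproc;

    public class GradientMap {
        // Turn a greyscale eye/eyebrow ROI into a gradient-magnitude image.
        public static Mat sobelMagnitude(Mat greyRoi) {
            Mat gx = new Mat();
            Mat gy = new Mat();
            Imgproc.Sobel(greyRoi, gx, CvType.CV_16S, 1, 0);  // d/dx
            Imgproc.Sobel(greyRoi, gy, CvType.CV_16S, 0, 1);  // d/dy
            Core.convertScaleAbs(gx, gx);                     // back to 8-bit
            Core.convertScaleAbs(gy, gy);
            Mat magnitude = new Mat();
            Core.addWeighted(gx, 0.5, gy, 0.5, 0, magnitude); // rough |gradient|
            return magnitude;
        }
    }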
Some interesting work on emotion detection was done using the Kinect, and it really works, though it requires a bit of offline training; see faceShift. A good test of correct processing (before mapping features to emotions) is trying to move a model of the face in sync with the target face - a kind of virtual avatar.

how to detect this specific movement gesture via sensors?

I'm working on an Android project. Its goal is to detect a predefined movement gesture of the device: if the device rotates 45 degrees about an axis (X, Y or Z) and then rotates back to its first position (the positions don't need to be exact - if it rotates 50 degrees instead of 45, that's not important), then the gesture has happened and the app should detect it.
I tried to do that using the device's accelerometer and magnetic sensors to continually monitor its orientation and detect the gesture, but the results weren't acceptable (explained here). Any starting point or idea, please?
It doesn't seem like anybody is going to provide a concrete and valuable answer. So let me try to do that.
First of all, even a somewhat primitive and straightforward approach makes it clear that you do not need to process all the data coming from the sensors. Moreover, humans are not that fast, so there is no need to process 10,000 values per second in order to identify any specific move.
What you actually need is just to identify key points and make your decision. Does that sound like a tangent to you?
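To make the key-point idea concrete, here is a minimal, hypothetical sketch on a single orientation angle; the thresholds and timeout are assumptions, and the angle you feed in would come from whatever orientation estimate you already compute from the sensors:

    public class TiltGestureDetector {
        // Assumed thresholds: "rotated" past ~45 degrees, then "back" under ~10 degrees.
        private static final float TRIGGER_DEG = 40f;
        private static final float RETURN_DEG = 10f;
        private static final long TIMEOUT_MS = 2000;

        private enum State { IDLE, ROTATED }
        private State state = State.IDLE;
        private long rotatedAt;

        // Feed this one smoothed orientation angle (degrees) per sensor update.
        // Returns true exactly once when the rotate-and-return gesture completes.
        public boolean onAngle(float tiltDegrees, long nowMs) {
            switch (state) {
                case IDLE:
                    if (Math.abs(tiltDegrees) > TRIGGER_DEG) {
                        state = State.ROTATED;        // key point 1: device tipped over
                        rotatedAt = nowMs;
                    }
                    return false;
                case ROTATED:
                    if (nowMs - rotatedAt > TIMEOUT_MS) {
                        state = State.IDLE;           // took too long; discard
                        return false;
                    }
                    if (Math.abs(tiltDegrees) < RETURN_DEG) {
                        state = State.IDLE;           // key point 2: back near the start
                        return true;
                    }
                    return false;
            }
            return false;
        }
    }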
What I'm actually suggesting is to test your solution using an ordinary mouse and an available gesture recognition framework, because the actual idea is pretty much the same. So please check:
iGesture - Gesture Recognition Framework
Mouse Gestures
In such a way it might be easier to develop a proper solution.
Update
Let's imagine I'm holding my phone and I need to rotate it 90 degrees counterclockwise and then 180 degrees clockwise. I hope you will not expect me to trace some complex 3D shapes in the air (that would break usability, and frankly I do not want to lose my phone), so it is fair to say there is a point we can track, or we can easily simulate it.
Please see my other answer for a simple but working solution to a similar problem:

Differentiate Between Robot mouseclick and human mouse click

I have an applet that I'm running online and I want to make sure people can't use the Java Robot class to operate the applet. I know that Yahoo does this on several of their game platforms and I was wondering if anyone knew how they accomplished it.
Watch mouse movement, and make sure you're not seeing "jumps" from one place to another, but movement over time instead. Sun/Oracle's J2SE tutorials show how to follow mouse movement events: http://download.oracle.com/javase/tutorial/uiswing/events/mousemotionlistener.html
Keep in mind that this would potentially fail to detect the difference between a robot and a person on something like a touch screen, or tablet input device.
One more thing to watch for is whether the user is clicking the same pixel, or just in the same vicinity. Humans are fairly imprecise, robots generally aren't unless programmed to be.
I would also put in a gesture logger for good measure that compiles this information, and keeps track of the actual movements of your users. If you suspect someone of cheating, you can then look at what their actual mouse movements looked like, and compare that with a known person. That will give you a better idea of what you need to look for than any of us can come up with off the tops of our heads.
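A minimal sketch of the "no jumps" check, assuming Swing and a threshold I made up; real tuning would come from the kind of gesture log described above:

    import java.awt.Point;
    import java.awt.event.MouseEvent;
    import java.awt.event.MouseMotionAdapter;
    import javax.swing.JComponent;

    public class JumpDetector {
        private Point last;
        private int suspiciousJumps;

        // Assumed threshold: a real hand rarely produces single motion events
        // hundreds of pixels apart, while Robot.mouseMove often does.
        private static final int JUMP_THRESHOLD_PX = 200;

        public void attachTo(JComponent component) {
            component.addMouseMotionListener(new MouseMotionAdapter() {
                @Override
                public void mouseMoved(MouseEvent e) {
                    Point p = e.getPoint();
                    if (last != null && last.distance(p) > JUMP_THRESHOLD_PX) {
                        suspiciousJumps++;   // pointer "teleported": flag for review
                    }
                    last = p;
                }
            });
        }

        public int getSuspiciousJumps() {
            return suspiciousJumps;
        }
    }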
Keep track of the distribution of mouse positions over time. Humans move the mouse differently than a robot that knows exactly where to position it every single time it clicks. Of course, a smarter robot can counter this defense.

Suitable widget for drawing a 2D cartesian coordinate system in java

I want to build a GUI area where users can click randomly. Then I want to retrieve the Cartesian (x, y) coordinates of the points where the user clicked. Which GUI component is suitable for my task? Also, I want each point drawn as a small dot (maybe 3 pixels or more - does this depend on resolution? And if I change the monitor's screen resolution during program execution, do the coordinates change?)
I usually work in web programming and I am totally novice in this area. Any pointers to good background reading would help greatly.
Thanks.
http://download.oracle.com/javase/tutorial/uiswing/events/mouselistener.html
Is this what you're searching for?
As for changing resolution, I believe you need a listener that updates the size, though I'm not really familiar with that, sorry.
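In case it helps, a minimal sketch of a JPanel that collects click coordinates and paints each one as a small dot (the class name and dot size are arbitrary choices):

    import java.awt.Graphics;
    import java.awt.Point;
    import java.awt.event.MouseAdapter;
    import java.awt.event.MouseEvent;
    import java.util.ArrayList;
    import java.util.List;
    import javax.swing.JFrame;
    import javax.swing.JPanel;
    import javax.swing.SwingUtilities;

    public class ClickCanvas extends JPanel {
        private final List<Point> clicks = new ArrayList<>();

        public ClickCanvas() {
            addMouseListener(new MouseAdapter() {
                @Override
                public void mouseClicked(MouseEvent e) {
                    clicks.add(e.getPoint());   // component-relative (x, y) in pixels
                    repaint();
                }
            });
        }

        @Override
        protected void paintComponent(Graphics g) {
            super.paintComponent(g);
            for (Point p : clicks) {
                g.fillOval(p.x - 2, p.y - 2, 4, 4);   // ~3-4 px dot per click
            }
        }

        public static void main(String[] args) {
            SwingUtilities.invokeLater(() -> {
                JFrame frame = new JFrame("Click canvas");
                frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                frame.add(new ClickCanvas());
                frame.setSize(400, 300);
                frame.setVisible(true);
            });
        }
    }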
