I'm having issues with a Java class project. The first step consists of drawing a pattern, so I thought, well, this can't be hard. And it isn't, but there's one thing that really bothers me. First, check the screenshot below:
Screenshot
My problem is that this was done without releasing the mouse, so the drawing should be continuous. Instead, there are holes in it. I'm thinking this is related to the way mouse events are transmitted, but I have no idea how to tweak this.
The drawing zone is a JPanel. There is a set of Points that is used to paint the container on mouse events: pressing adds the first point, dragging adds the others, and releasing clears the drawing zone.
Hope I was specific enough. Thanks for your time!
edit : Forgot the code. http://pastebin.com/RyXiGsvm
StanislavL's right that mouseMove/mouseDrag events are not generated for every pixel the mouse cursor moves over. Why don't you want to use lines? If the issue is that the result looks jagged, you might consider using cubic splines instead; GeneralPath.curveTo provides an easy way to draw them. If getting the control points right is a pain, you can also use GeneralPath.quadTo: it's a quadratic approximation that won't look quite as good, but you can simply pass in the last three points from your mouseDrag events.
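To make that concrete, here's a minimal sketch of the quadTo idea (the helper name and the Point list are my own assumptions, not from the posted code): each captured point becomes the control point of a quadratic segment that ends at the midpoint to its successor, so consecutive curves join smoothly.

    import java.awt.Point;
    import java.awt.geom.GeneralPath;
    import java.util.List;

    // Each drag point is used as a control point; ending each curve at
    // the midpoint to the next point keeps the joins smooth.
    static GeneralPath smoothPath(List<Point> pts) {
        GeneralPath path = new GeneralPath();
        if (pts.isEmpty()) {
            return path;
        }
        path.moveTo(pts.get(0).x, pts.get(0).y);
        for (int i = 1; i < pts.size() - 1; i++) {
            Point ctrl = pts.get(i);
            Point next = pts.get(i + 1);
            path.quadTo(ctrl.x, ctrl.y,
                        (ctrl.x + next.x) / 2.0, (ctrl.y + next.y) / 2.0);
        }
        if (pts.size() > 1) {
            Point last = pts.get(pts.size() - 1);
            path.lineTo(last.x, last.y);
        }
        return path;
    }

Draw the result in paintComponent with ((Graphics2D) g).draw(smoothPath(points)).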
I guess you store the mouse points obtained in mouseDrag processing. Drag events only arrive at some time interval, so if you move the mouse relatively fast you just get scattered points. To draw it you can use drawLine(), passing pairs of consecutive points, so you'll have connected lines.
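Assuming the panel already keeps the points in a list (the class and field names below are placeholders), the paint code could look like this:

    import java.awt.Graphics;
    import java.awt.Point;
    import java.util.ArrayList;
    import java.util.List;
    import javax.swing.JPanel;

    class DrawPanel extends JPanel {
        // Filled in mousePressed/mouseDragged, cleared on mouseReleased.
        private final List<Point> points = new ArrayList<>();

        @Override
        protected void paintComponent(Graphics g) {
            super.paintComponent(g);
            // Connect each stored point to the next so the gaps between
            // sparse drag events are filled with straight segments.
            for (int i = 1; i < points.size(); i++) {
                Point a = points.get(i - 1);
                Point b = points.get(i);
                g.drawLine(a.x, a.y, b.x, b.y);
            }
        }
    }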
I'm writing code which moves the mouse in an applet by sending MouseEvent objects for it to process. In order for my code to move the mouse from one location to another, I need to generate points to fill the path so that my mouse can move through them. However, in order to create the right amount of points (i.e., to mimic movement as if done by physically moving the mouse), I need to determine the physical mouse's polling rate so I know how often it tells my machine about its position.
I looked around for ways to retrieve this value, but the best I found was the MouseInfo class, and all it tells me is the number of buttons on the mouse and some information about its pointer - not what I'm looking for. Does anyone know a way (preferably without some sort of external dependency) to read the physical mouse's polling rate?
I'm not sure if there's a solution for this in the API, but I suggest setting up a mouse listener and capturing timestamps with System.currentTimeMillis() or System.nanoTime(): just wave the mouse around for a bit and measure the time between events firing. While you're moving the mouse, the MouseEvents should fire as fast as the mouse is polled. I think.
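A minimal sketch of that measurement (PollProbe is a made-up name; Swing may coalesce motion events, so treat the numbers as a rough estimate rather than the hardware polling rate):

    import java.awt.event.MouseEvent;
    import java.awt.event.MouseMotionAdapter;
    import javax.swing.JFrame;
    import javax.swing.JPanel;
    import javax.swing.SwingUtilities;

    // Prints the gap between consecutive mouseMoved events while you
    // wave the pointer over the window.
    public class PollProbe {
        public static void main(String[] args) {
            SwingUtilities.invokeLater(() -> {
                JPanel panel = new JPanel();
                panel.addMouseMotionListener(new MouseMotionAdapter() {
                    private long last;

                    @Override
                    public void mouseMoved(MouseEvent e) {
                        long now = System.nanoTime();
                        if (last != 0) {
                            System.out.printf("%.2f ms%n", (now - last) / 1e6);
                        }
                        last = now;
                    }
                });
                JFrame frame = new JFrame("Poll probe");
                frame.add(panel);
                frame.setSize(400, 300);
                frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                frame.setVisible(true);
            });
        }
    }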
I need your help.
I am trying to implement a paint system with live-tracking. You may think: “Live-tracking? What do you mean?”
So I am implementing a solution to track each touch movement on the display, so I can decide whether to paint on a specific spot or not. This decision must be made in real time, right when the user touches the display.
There will be some images where the user should paint the inner white space inside of the respective shape; figure 1 shows an example of this shape.
The user should paint the inside of the shape in a specific direction/order; I still don’t know how I can encode this order/direction in the image. An example of a specific order would be the one shown in figure 2.
So I want the user to paint starting from zone 1 through to zone 6, do you understand? I don’t want the user to be able to paint zone 1 and then zone 3 without painting zone 2 first…
In the end, I would have the shape filled, as figure 3 shows. The arrows are there just to show the right direction of drawing.
It’s not just about direction; the specific spots on the image matter too. There will be different types of images, and I can’t rely on direction alone.
How can I determine the specific spots in an image with a specific order to be painted?
The only idea that came to my mind would be some kind of training mode, where I (the developer) would draw over the image in the order I want, and the system behind it would create virtual points in the code, with a small distance between each of them. Then I would have an array with all the “spots” saved in the order I drew them, do you understand?
I don’t know if this will work, or if it will be fast enough to traverse my array of “spots” while the user is trying to draw on the image. I want this to work in real time: the system should check whether a spot is valid to draw on right when the user touches the display…
Do you think it will work? Do you have some recommendations about this? What are the best structures and stuff like that? I am not asking for written code, I just need some guidelines.
I think I explained my problem well, if you have any question please ask me.
Thank you very much in advance!
If all your shapes are lines, you can treat the points you want your users to "touch" as squares: every time an ACTION_MOVE event happens, check whether the finger is touching the next point. You can make the size of the squares whatever suits you, like 48dp or 92dp, or the width/height of the shape you are drawing.
You said you don't want any code, so I assume you know how to do this (if not, reply and I will post code too).
Also, you didn't mention what happens if the user goes outside the shape. If you want the user to continue drawing until they lift their finger, then this approach works: when the ACTION_UP event happens, check whether the last point has been touched (assuming it can only be touched if all the others were touched first, as explained above).
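For readers who do want a sketch, the ordered-square check could look roughly like this (the class and method names are hypothetical; only MotionEvent and Rect come from the Android SDK):

    import android.graphics.Rect;
    import android.view.MotionEvent;
    import java.util.List;

    // Checkpoints are precomputed squares laid out in the required
    // drawing order; the user must reach them one by one.
    public class OrderedPathTracker {
        private final List<Rect> checkpoints;
        private int nextIndex = 0;

        public OrderedPathTracker(List<Rect> checkpoints) {
            this.checkpoints = checkpoints;
        }

        /** Call on ACTION_MOVE; returns true if painting here is allowed. */
        public boolean onMove(MotionEvent event) {
            int x = (int) event.getX();
            int y = (int) event.getY();
            if (nextIndex < checkpoints.size()
                    && checkpoints.get(nextIndex).contains(x, y)) {
                nextIndex++; // advanced to the next zone in order
            }
            return nextIndex > 0; // painting only valid once zone 1 is hit
        }

        /** Call on ACTION_UP to see whether the last point was reached. */
        public boolean isComplete() {
            return nextIndex == checkpoints.size();
        }
    }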
Here is an image if you'd like to have a visual guide:
Obviously, if you do it this way, you have to calculate the positions of each point you will have; I doubt there is any way to automate this.
If you need any more help or you need some code please reply and I will post some! :D
I'm using XYPointerAnnotations on a chart, which are great. I'm just wondering if it's at all possible to change the arrow at the end of the pointer to another shape, or to remove the arrow entirely. Maybe there's a different kind of annotation I could use? I'm not sure.
After quite a bit of searching and experimenting, I've found no evidence that the arrow pointer of an XYPointerAnnotation can be changed. Using XYTextAnnotations, rotating them, and drawing your own lines and shapes can give similar results, but this is more involved and not as simple as creating an XYPointerAnnotation. I've elected to just keep the pointer arrows in my chart.
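For anyone attempting that workaround, a rough sketch (the helper name, angle convention, and line length are placeholders; the annotation classes are JFreeChart's):

    import java.awt.BasicStroke;
    import java.awt.Color;
    import org.jfree.chart.annotations.XYLineAnnotation;
    import org.jfree.chart.annotations.XYTextAnnotation;
    import org.jfree.chart.plot.XYPlot;

    // An arrow-free "pointer": a rotated label plus a plain line.
    static void addPlainPointer(XYPlot plot, String label,
                                double x, double y, double angle) {
        XYTextAnnotation text = new XYTextAnnotation(label, x, y);
        text.setRotationAngle(angle); // radians
        plot.addAnnotation(text);

        double len = 20.0; // line length in data units; tune for your axes
        plot.addAnnotation(new XYLineAnnotation(
                x, y,
                x + len * Math.cos(angle), y + len * Math.sin(angle),
                new BasicStroke(1f), Color.BLACK));
    }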
How do you get the sensitivity of a mouse, change it, and then apply it to the mouse?
-Progress removed, showed the speed of clicking instead of the speed of moving-
I have researched this "everywhere", but there is nothing on this subject.
First of all, I think arg0.getXOnScreen() will give you the absolute x coordinate of the mouse, not the old position, as you're assuming by naming the variable oldX. getX() should give you the position within the panel (or widget, or whatever the equivalent is in the API you are using). The second thing is: what do you mean by 'sensitivity of the mouse'? Do you want to change the global system settings for the mouse from Java? I don't think that is even possible. Look here: this will require you to add a JNI lib to the project and invoke some native libs, so you'd make your code platform-dependent.
You would probably have to tinker with the Robot class and listen for mouse events. So you'd listen for mouse-pressed events, then use Robot to move the mouse 3x more pixels than the mouse is actually moving; after that you'd issue a mouse-up event via the Robot class, reposition the mouse to the original position, and follow with a mouse-down event via the Robot.
See how this can be problematic? This would be extremely use-case driven and not something to do generically. I've been doing Java a long time, so I could probably pull it off, but it is not something a novice could easily do, because other issues would come up that need resolving during debugging.
Noticed this thread when dealing with a 3D API that rotates the view WAY TOO SLOWLY.
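Very roughly, the idea could look like the sketch below (untested, fragile, and platform-sensitive; it amplifies drags rather than doing the full press/release dance described above, and the class name is made up):

    import java.awt.AWTException;
    import java.awt.Robot;
    import java.awt.event.MouseAdapter;
    import java.awt.event.MouseEvent;

    // Amplifies drag movement roughly 3x by warping the cursor with
    // Robot. The 'warping' flag keeps the Robot's own synthetic move
    // from being amplified again.
    public class SensitivityBooster extends MouseAdapter {
        private final Robot robot;
        private int lastX, lastY;
        private boolean warping;

        public SensitivityBooster() throws AWTException {
            robot = new Robot();
        }

        @Override
        public void mousePressed(MouseEvent e) {
            lastX = e.getXOnScreen();
            lastY = e.getYOnScreen();
        }

        @Override
        public void mouseDragged(MouseEvent e) {
            int x = e.getXOnScreen(), y = e.getYOnScreen();
            if (warping) {      // our own Robot move; just resync
                warping = false;
                lastX = x;
                lastY = y;
                return;
            }
            int dx = x - lastX, dy = y - lastY;
            lastX = x;
            lastY = y;
            warping = true;
            robot.mouseMove(x + 2 * dx, y + 2 * dy); // total ≈ 3x
        }
    }

Register the same instance with both addMouseListener and addMouseMotionListener; MouseAdapter implements both interfaces.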
I've been attempting to develop a means of synthesizing human-like mouse movement in an application of mine for the past few weeks. At the start I used simple techniques like polynomial and spline interpolation; however, even with a little noise, the result still failed to appear sufficiently human-like.
In an effort to remedy this issue, I've been researching ways of applying machine learning algorithms to real human mouse movement biometrics in order to synthesize mouse movements by learning from recorded human ones. Users would compile a profile of recorded movements that would train the program for synthesis purposes.
I've been searching for a few weeks and have read several articles on the application of inverse biometrics to generating mouse dynamics, such as Inverse Biometrics for Mouse Dynamics; they tend to focus, however, on generating realistic timing for randomly-generated dynamics, while I was hoping to generate a path specifically from A to B. Plus, I still actually need to come up with a path, not just a few dynamics measured from one.
Does anyone have a few pointers to help a noob?
Currently, testing is done by recording movements and having me and several other developers watch the playback. Ideally the movement would be able to trick both an automatic biometric classifier and a real, live, breathing Homo sapiens.
Fitts's law gives a very good estimate of the time needed to position the mouse pointer. In the derivation section there is a simple explanation; I think you could use this as one of the basic building blocks of your app. Start with big movements, put some inaccuracy both in the direction and the length of the movement, then do a smaller correction movement, and so on...
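For reference, the Shannon formulation is T = a + b·log2(D/W + 1), where D is the distance to the target and W its width. A one-liner to budget movement time (the a and b constants below are made-up placeholders you'd calibrate against real recordings):

    // Fitts's law, Shannon formulation: predicted time to hit a target
    // of width w at distance d.
    static double fittsTimeMillis(double d, double w) {
        double a = 100.0; // ms of reaction/start-stop overhead (made up)
        double b = 150.0; // ms per bit of difficulty (made up)
        return a + b * (Math.log(d / w + 1.0) / Math.log(2.0));
    }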
First, I guess you record human mouse movements from A to B, because otherwise trying to synthesize a model for such movement does not seem possible to me.
Second, how about measuring the deviations from the "direct" path, perhaps in relation to time? I actually suspect that movements look different for different angles, path lengths, etc., but maybe you can try a normalized model first, one that you just stretch (in space and time) and rotate as you need it.
Third, the learning. The easiest thing would be to just have a collection of real moves (in the form I discussed above) and sample from that collection; evaluate how that looks. If you really want a probabilistic model, then you have to evaluate what kinds of models fit. Is it enough to blur the direct path with Gaussian noise whose parameters you learn from your training set? Or some (sine-)wavy deviation? Or separate models for "getting near the button" and "final corrections"? Fitts's law might be useful for evaluation.
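A bare-bones version of the stretch-and-rotate idea from the second point (everything here is hypothetical scaffolding): record paths normalized so they run from (0,0) to (1,0), then map one onto an arbitrary A-to-B move.

    import java.awt.geom.Point2D;
    import java.util.ArrayList;
    import java.util.List;

    // Replays a normalized recorded path between arbitrary endpoints
    // by scaling and rotating it.
    static List<Point2D> retarget(List<Point2D> normalized,
                                  Point2D from, Point2D to) {
        double dx = to.getX() - from.getX();
        double dy = to.getY() - from.getY();
        double len = Math.hypot(dx, dy);
        double cos = dx / len, sin = dy / len;

        List<Point2D> out = new ArrayList<>();
        for (Point2D p : normalized) {
            double x = p.getX() * len; // stretch along the movement axis
            double y = p.getY() * len; // lateral deviation scales too
            out.add(new Point2D.Double( // rotate, then translate
                    from.getX() + x * cos - y * sin,
                    from.getY() + x * sin + y * cos));
        }
        return out;
    }

Whether lateral deviation should really scale with distance is exactly the kind of question to answer by looking at the recordings.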
This question reminded me of a website I knew about years ago, so I visited it and found this in-depth discussion on the topic.
The timing is so similar as to make me think this question is related in some way. In fact, someone in the thread linked to the same article you did. If it's not related, well, there's a link to a lot of people discussing exactly what you're thinking about.
I don't think the problem is all that well defined. There is an important notion not mentioned so far, which is context. The mouse movement on my screen when Chrome has focus is massively different from the motion when Vim has focus.
The way a mouse moves varies based on the type of device, the type of action, the UI elements involved, familiarity with the UI, the speed at which the user is attempting to complete their task, the skill of the user, initial failures by the user (e.g., mis-clicks), and the user's emotional state (as well as many other factors). Do you plan on creating several pathing strategies to correspond to different contexts? Also, how well do you know the algorithm you are trying to fool? I assume not extensively, or you would simply program directly against that algorithm.
If a human is looking at the pathing, they might be able to identify the state associated with a pathing strategy, and may be more inclined to be fooled if they identify it as a human state (e.g., the user is rushing, mis-clicks, quickly closes the resulting popup, tries again more slowly). UI comes into play with more than just size and position: I often point quickly to a toolbar, then slide across the options until I get to my target. Another example is that I typically pause on menu items while I am scanning for my target, or hover over text I am reading. Are you attempting to emulate human behavior, or just human mouse movements (because I think they are joined at the hip)?
Are you wanting to simulate human-like mouse movement because you are doing real-time online training for your game? If your training sequences are static, just record your mouse movements and play a mouse-click sound effect whenever you click the mouse button. No synthetic mouse movement is going to feel more "real" to you than your own.
Personally, I feel experts in software move their mice too quickly in training videos. I prefer an approach taken by screencast video software I've seen that always moves the mouse linearly from point A --> B. The trick was that every mouse move made in the video took the same amount of time regardless of distance, say 3/4 of a second, followed by a mouse-click sound effect.
I believe they moved the mouse this way because the viewer could then anticipate the landing area of the mouse from the direction and velocity at the start of the move. In a training situation, I suppose such regular movements are gentler on the eye and perhaps easier to retain/recall.
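If you go that route, the constant-duration linear move is trivial with java.awt.Robot (a sketch; the 750 ms and the step count are arbitrary choices):

    import java.awt.AWTException;
    import java.awt.Robot;

    // Moves the pointer from (x1,y1) to (x2,y2) in a fixed total time,
    // regardless of distance, like the screencast style described above.
    static void linearMove(int x1, int y1, int x2, int y2) throws AWTException {
        Robot robot = new Robot();
        int steps = 60;        // smoothness of the motion
        int totalMillis = 750; // same duration for every move
        for (int i = 1; i <= steps; i++) {
            double t = (double) i / steps;
            robot.mouseMove((int) Math.round(x1 + t * (x2 - x1)),
                            (int) Math.round(y1 + t * (y2 - y1)));
            robot.delay(totalMillis / steps);
        }
    }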
Have you considered adding mouse tracking to your application so you essentially record how the user moves the mouse and then analyze the recordings?
I have not looked into this recently, but I believe a MouseListener (plus a MouseMotionListener for the movement itself) in a Swing application would get you the information you need.