Robot.mouseMove not moving to specified location properly - java

Whenever I run a mouseMove command for a robot, the mouse doesn't always go to the same location. For example, I have the following code:
import java.awt.Robot;
import java.util.concurrent.TimeUnit;

public class MainBot {
    public static void main(String[] args) {
        try {
            Robot screenWin = new Robot();
            TimeUnit.SECONDS.sleep(2);
            screenWin.mouseMove(100, 300);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
The code usually makes the mouse end up in the wrong spot. First, I hit run (I am using Eclipse) and move my mouse to some location before the 2-second timer is up. Then the 2-second delay finishes, the mouse moves, and the script ends. The problem is that the mouse never seems to go to the same exact place twice. For example, the mouse should go to (100, 300), but most of the time it goes to something that looks like (0, 300). Other times, however, if I move the mouse at the beginning to roughly where it should end up, it goes to the right spot.
I am getting the target coordinates by taking a screenshot and reading the pixel location in Paint, but I don't think that is the problem, because the final location keeps changing.
Is there anything I'm missing about how the coordinates for mouseMove work?
Edit: Basically, I hit start with that program, then I move the mouse to a new position (so there is a different initial position before the mouseMove call), and then mouseMove executes. Each time I do this, the mouse ends up in a different location.

There's an open bug on OpenJDK, so this could be related:
https://bugs.openjdk.java.net/browse/JDK-8196030?jql=project%20in%20(JDK)%20AND%20component%20in%20(client-libs)%20AND%20Subcomponent%20in%20(java.awt)
The bug report details a problem that may have been introduced in the Windows 10 Fall Creators Update, related to screen scaling and the mouse_move function.
In the meantime, you could try setting your screen scale to 100% instead of 125% and see if it helps.
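If changing the Windows scale setting is not practical, you can at least detect the scale from Java 9+ via GraphicsConfiguration.getDefaultTransform(). A hedged sketch follows: reading the factor is standard API, but whether (and how) coordinates need compensating depends on the JDK build and Windows version, so the commented-out adjustment is only an experiment to try:

import java.awt.GraphicsEnvironment;
import java.awt.Robot;
import java.awt.geom.AffineTransform;

public class ScaleAwareMove {
    public static void main(String[] args) throws Exception {
        // Scale factor of the default screen: 1.0 at 100%, 1.25 at 125% scaling.
        AffineTransform tx = GraphicsEnvironment.getLocalGraphicsEnvironment()
                .getDefaultScreenDevice().getDefaultConfiguration().getDefaultTransform();
        double sx = tx.getScaleX();
        double sy = tx.getScaleY();
        System.out.println("Display scale: " + sx + " x " + sy);

        Robot robot = new Robot();
        robot.mouseMove(100, 300); // may land in the wrong place on affected JDK/Windows combinations
        // Experimental compensation to try if it does (whether to divide or multiply,
        // or whether it is needed at all, depends on the JDK build):
        // robot.mouseMove((int) (100 / sx), (int) (300 / sy));
    }
}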

I found a solution: just move the mouse to the coordinate (0, 0) first, then you can move it to the place you want.
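A minimal sketch of that workaround, assuming the same (100, 300) target as in the question:

import java.awt.Robot;

public class ResetThenMove {
    public static void main(String[] args) throws Exception {
        Robot robot = new Robot();
        robot.mouseMove(0, 0);     // park the cursor at the top-left corner first
        robot.mouseMove(100, 300); // then move to the real target
    }
}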

I wrote a class to do proper cursor positioning.
It works under Windows 10 scaling too.
Use the MoveMouseControlled(double, double) method to move the cursor to a specified position. It uses a [0, 1] coordinate system; the point (0, 0) is the upper-left corner of the screen.
import java.awt.AWTException;
import java.awt.Dimension;
import java.awt.MouseInfo;
import java.awt.Point;
import java.awt.Robot;
import java.awt.Toolkit;

public class MouseCorrectRobot extends Robot
{
    final Dimension ScreenSize; // primary screen size

    public MouseCorrectRobot() throws AWTException
    {
        super();
        ScreenSize = Toolkit.getDefaultToolkit().getScreenSize();
    }

    // Euclidean distance between two points
    private static double getTav(Point a, Point b)
    {
        return Math.sqrt((double) ((a.x - b.x) * (a.x - b.x) + (a.y - b.y) * (a.y - b.y)));
    }

    // Position of the cursor in [0,1] ranges; (0,0) is the upper-left corner
    public void MoveMouseControlled(double xbe, double ybe)
    {
        int xbepix = (int) (ScreenSize.width * xbe);  // target x in pixels
        int ybepix = (int) (ScreenSize.height * ybe); // target y in pixels
        int x = xbepix;
        int y = ybepix;
        Point mert = MouseInfo.getPointerInfo().getLocation(); // measured cursor position
        Point ElozoInitPont = new Point(0, 0); // last position where the cursor was still making progress
        int UgyanAztMeri = 0;                  // number of iterations the cursor has barely moved
        final int UgyanAZtMeriLimit = 30;      // give up if the cursor is stuck for this many iterations
        int i = 0;
        final int LepesLimit = 20000;          // hard cap on the number of one-pixel steps
        while ((mert.x != xbepix || mert.y != ybepix) && i < LepesLimit && UgyanAztMeri < UgyanAZtMeriLimit)
        {
            ++i;
            // nudge the requested position one pixel toward the target on each axis
            if (mert.x < xbepix)
                ++x;
            else
                --x;
            if (mert.y < ybepix)
                ++y;
            else
                --y;
            mouseMove(x, y);
            // re-measure where the cursor actually ended up
            mert = MouseInfo.getPointerInfo().getLocation();
            if (getTav(ElozoInitPont, mert) < 5)
                ++UgyanAztMeri;
            else
            {
                UgyanAztMeri = 0;
                ElozoInitPont.x = mert.x;
                ElozoInitPont.y = mert.y;
            }
        }
    }
}
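A short usage sketch: the 2-second pause just gives you time to let go of the mouse, and (0.5, 0.5) is the centre of the primary screen:

public class MouseCorrectRobotDemo {
    public static void main(String[] args) throws Exception {
        MouseCorrectRobot robot = new MouseCorrectRobot();
        Thread.sleep(2000);                  // time to take your hand off the mouse
        robot.MoveMouseControlled(0.5, 0.5); // step the cursor to the centre of the screen
    }
}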

I just had a similar problem; to solve it, I just run a loop:
Test the position
Move
Test the position
If it is not OK, move again
And it always works in less than two iterations:
Point pd = new Point(X, Y); // X, Y: where the mouse must go
int n = 0;
// r is a java.awt.Robot created earlier; retry until the cursor really is at pd (max 5 tries)
while (!pd.equals(MouseInfo.getPointerInfo().getLocation()) && (++n <= 5)) {
    r.mouseMove(pd.x, pd.y);
}
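A self-contained version of the same retry idea (the (100, 300) target is only an example):

import java.awt.MouseInfo;
import java.awt.Point;
import java.awt.Robot;

public class RetryMouseMove {
    public static void main(String[] args) throws Exception {
        Robot r = new Robot();
        Point pd = new Point(100, 300); // where the mouse must go
        int n = 0;
        // keep re-issuing the move until the OS reports the cursor at the target (max 5 tries)
        while (!pd.equals(MouseInfo.getPointerInfo().getLocation()) && (++n <= 5)) {
            r.mouseMove(pd.x, pd.y);
        }
        System.out.println("Attempts: " + n + ", final position: "
                + MouseInfo.getPointerInfo().getLocation());
    }
}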

It works well (correct location) in full-screen mode with zoom = 100%. Press F11 in Chrome to make the page full screen.

Related

Java: Chess - Moving a piece

I would like to move the piece using my mouse.
For example, say there's a pawn sitting on a square on the chess board.
If my mouse was to click (press and release) in the square that the pawn is in, it would be selected. Afterwards, I would click (press and release) in an appropriate square. Say, the square in front of it since that is a proper move in Chess. The pawn would move (get erased from the square it was on, then redrawn on the new one) to the final selected square.
Currently, the Pawn is just sitting on a square when execution is finished.
If it's any help, I am using Ready To Program Java (my teacher told me to) as well as the c console (c = new Console();).
Here is the source code of what I have done so far :)
import hsa.Console;
import java.awt.*;

public class Chess
{
    static Console c;

    public static void main(String[] args)
    {
        c = new Console(30, 100); // Rows, Columns (X = 790px && Y = 600px)
        Board();
        Pawn();
    }

    public static void Board() // Board: 504px x 504px, Square: 63px x 63px
    {
        int Horizontal = 143; // Board's origin point (X)
        int Vertical = 48;    // Board's origin point (Y)
        for (Vertical = 48; Vertical < 552; Vertical += 63) // Moving onto the next "line"
        {
            for (Horizontal = 143; Horizontal < 647; Horizontal += 63) // Filling the "line" with squares
            {
                c.drawRect(Horizontal, Vertical, 63, 63); // Drawing the squares
            }
            Horizontal = 143; // Resetting the "line"
        }
    }

    public static void Pawn() // Image and properties of a PAWN piece
    {
        c.setColor(Color.red); // How the Pawn looks
        c.drawOval(143, 111, 63, 63);
    }
}
The documentation for the hsa Console class can be found here:
http://stbenedict.wcdsb.ca/hsa/Console.html
As it specifies, Console implements the EventListener interface, which has a subinterface, MouseListener. There are many guides online that document how to handle events sent to a MouseListener; here is one from Oracle's documentation:
https://docs.oracle.com/javase/tutorial/uiswing/events/mouselistener.html
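Since hsa.Console is non-standard, here is a hedged, generic Swing sketch of the click-to-select, click-to-move idea from that tutorial. The 63 px square size and board origin are taken from the question; the class name, field names, and plain 8x8 board are made up for illustration:

import java.awt.Graphics;
import java.awt.Point;
import java.awt.event.MouseAdapter;
import java.awt.event.MouseEvent;
import javax.swing.JFrame;
import javax.swing.JPanel;

public class ClickToMoveDemo extends JPanel {
    private static final int SQUARE = 63, ORIGIN_X = 143, ORIGIN_Y = 48;
    private Point pawn = new Point(0, 1); // pawn's square (column, row)
    private Point selected = null;        // currently selected square, if any

    public ClickToMoveDemo() {
        addMouseListener(new MouseAdapter() {
            @Override
            public void mouseClicked(MouseEvent e) {
                int col = (e.getX() - ORIGIN_X) / SQUARE;
                int row = (e.getY() - ORIGIN_Y) / SQUARE;
                Point square = new Point(col, row);
                if (selected == null && square.equals(pawn)) {
                    selected = square;     // first click: select the pawn
                } else if (selected != null) {
                    pawn = square;         // second click: move the pawn to the clicked square
                    selected = null;
                }
                repaint();                 // redraw the board with the pawn in its new square
            }
        });
    }

    @Override
    protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        for (int r = 0; r < 8; r++)
            for (int c = 0; c < 8; c++)
                g.drawRect(ORIGIN_X + c * SQUARE, ORIGIN_Y + r * SQUARE, SQUARE, SQUARE);
        g.drawOval(ORIGIN_X + pawn.x * SQUARE, ORIGIN_Y + pawn.y * SQUARE, SQUARE, SQUARE);
    }

    public static void main(String[] args) {
        JFrame f = new JFrame("Click to move");
        f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        f.add(new ClickToMoveDemo());
        f.setSize(790, 600);
        f.setVisible(true);
    }
}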

Ball no longer moves in BallWallBounce (Art and Science of Java Ex4.15)

From The Art and Science of Java, Chapter 4, Exercise 15: I am supposed to write a program that animates a ball bouncing from edge to edge within the window.
Here is my code:
import acm.graphics.*;
import acm.program.*;
import java.awt.*;

public class BouncingBall extends GraphicsProgram {

    private static final int N_STEPS = 1000;
    private static final int PAUSE_TIME = 2;
    private static final double ovalsize = 50;

    public void run() {
        // positions the ball's start position at the center of the window
        GOval oval = new GOval(getWidth() / 2 - ovalsize / 2, getHeight() / 2 - ovalsize / 2,
                ovalsize, ovalsize);
        oval.setFilled(true);
        add(oval);
        double dx = ((getWidth() - ovalsize) / 4) / N_STEPS;
        double dy = ((getHeight() - ovalsize) / 2) / N_STEPS;
        while (true) {
            oval.move(dx, dy); // moves the oval one step
            pause(PAUSE_TIME);
            // if the ball encounters any edge of the screen, it changes direction
            if (oval.getY() > getHeight() - ovalsize) {
                dy *= -1;
            }
            if (oval.getX() > getWidth() - ovalsize) {
                dx *= -1;
            }
            if (oval.getY() < 0) {
                dy *= dy - 1;
            }
            if (oval.getX() < 0) {
                dx *= -1;
            }
        }
    }
}
I used the code from http://tenasclu.blogspot.co.uk/2012/12/first-few-days-of-learning-to-program.html to help me understand how the ball should bounce back from the edges of the window.
After making the program run, I realized that when I change:
double dx=((getWidth()-ovalsize)/4)/N_STEPS;
double dy=((getHeight()-ovalsize)/2)/N_STEPS;
to
double dx = (getWidth()/N_STEPS);
double dy = (getWidth()/N_STEPS);
(which is the code from the other webpage), the ball no longer moves.
Can anyone tell me what's happening?
EDIT: I went back to test the program and it appears there is another problem with the code that might be related. When I start the application, the ball moves as normal, then it moves faster and faster, and after about 62 seconds it bugs out and moves back and forth only along the top edge of the window.

Image Processing Edge Detection in Java

This is my situation: it involves aligning a scanned image to account for incorrect scanning, and I must do the alignment from my Java program.
These are more details:
There is a table-like form printed on a sheet of paper, which will be scanned into an image file.
I will open the picture with Java, and I will have an OVERLAY of text boxes.
The text boxes are supposed to align correctly with the scanned image.
In order to align correctly, my Java program must analyze the scanned image and detect the coordinates of the edges of the table on the scanned image, and thus position the image and the textboxes so that the textboxes and the image both align properly (in case of incorrect scanning)
You see, the person scanning the image might not place it in a perfectly correct position, so I need my program to automatically align the scanned image as it loads it. The program will be reused on many such scanned images, so it needs to be flexible in this way.
My question is one of the following:
How can I use Java to detect the y coordinate of the upper edge of the table and the x coordinate of the leftmost edge of the table? The table is a regular table with many cells, with a thin black border, printed on a white sheet of paper (horizontal printout).
If an easier method exists to automatically align the scanned image so that all scanned images have the graphical table aligned to the same x, y coordinates, then please share this method :).
If you don't know the answer to the above two questions, do tell me where I should start. I don't know much about graphics programming in Java and I have about one month to finish this program, so assume that I have a tight schedule and I have to keep the graphics part as simple as possible.
Cheers and thank you.
Try to start from a simple scenario and then improve the approach.
1. Detect corners.
2. Find the corners in the boundaries of the form.
3. Using the form corner coordinates, calculate the rotation angle.
4. Rotate/scale the image.
5. Map the position of each field in the form relative to the form origin coordinates.
6. Match the textboxes.
The program presented at the end of this post does steps 1 to 3. It was implemented using the Marvin Framework. The image below shows the output image with the detected corners.
The program also outputs: Rotation angle:1.6365770416167182
Source code:
import java.awt.Color;
import java.awt.Point;
import marvin.image.MarvinImage;
import marvin.io.MarvinImageIO;
import marvin.plugin.MarvinImagePlugin;
import marvin.util.MarvinAttributes;
import marvin.util.MarvinPluginLoader;

public class FormCorners {

    public FormCorners() {
        // Load plug-in
        MarvinImagePlugin moravec = MarvinPluginLoader.loadImagePlugin("org.marvinproject.image.corner.moravec");
        MarvinAttributes attr = new MarvinAttributes();

        // Load image
        MarvinImage image = MarvinImageIO.loadImage("./res/printedForm.jpg");

        // Process and save output image
        moravec.setAttribute("threshold", 2000);
        moravec.process(image, null, attr);
        Point[] boundaries = boundaries(attr);
        image = showCorners(image, boundaries, 12);
        MarvinImageIO.saveImage(image, "./res/printedForm_output.jpg");

        // Print rotation angle
        double angle = (Math.atan2((boundaries[1].y * -1) - (boundaries[0].y * -1), boundaries[1].x - boundaries[0].x) * 180 / Math.PI);
        angle = angle >= 0 ? angle : angle + 360;
        System.out.println("Rotation angle:" + angle);
    }

    private Point[] boundaries(MarvinAttributes attr) {
        Point upLeft = new Point(-1, -1);
        Point upRight = new Point(-1, -1);
        Point bottomLeft = new Point(-1, -1);
        Point bottomRight = new Point(-1, -1);
        double ulDistance = 9999, blDistance = 9999, urDistance = 9999, brDistance = 9999;
        double tempDistance = -1;
        int[][] cornernessMap = (int[][]) attr.get("cornernessMap");

        for (int x = 0; x < cornernessMap.length; x++) {
            for (int y = 0; y < cornernessMap[0].length; y++) {
                if (cornernessMap[x][y] > 0) {
                    if ((tempDistance = Point.distance(x, y, 0, 0)) < ulDistance) {
                        upLeft.x = x; upLeft.y = y;
                        ulDistance = tempDistance;
                    }
                    if ((tempDistance = Point.distance(x, y, cornernessMap.length, 0)) < urDistance) {
                        upRight.x = x; upRight.y = y;
                        urDistance = tempDistance;
                    }
                    if ((tempDistance = Point.distance(x, y, 0, cornernessMap[0].length)) < blDistance) {
                        bottomLeft.x = x; bottomLeft.y = y;
                        blDistance = tempDistance;
                    }
                    if ((tempDistance = Point.distance(x, y, cornernessMap.length, cornernessMap[0].length)) < brDistance) {
                        bottomRight.x = x; bottomRight.y = y;
                        brDistance = tempDistance;
                    }
                }
            }
        }
        return new Point[]{upLeft, upRight, bottomRight, bottomLeft};
    }

    private MarvinImage showCorners(MarvinImage image, Point[] points, int rectSize) {
        MarvinImage ret = image.clone();
        for (Point p : points) {
            ret.fillRect(p.x - (rectSize / 2), p.y - (rectSize / 2), rectSize, rectSize, Color.red);
        }
        return ret;
    }

    public static void main(String[] args) {
        new FormCorners();
    }
}
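Steps 4 to 6 are not covered by the code above. As a hedged sketch of step 4 only, here is how the image could be rotated back by the detected angle using plain Java 2D rather than Marvin; the file names and the angle value are assumptions taken from the example above:

import java.awt.geom.AffineTransform;
import java.awt.image.AffineTransformOp;
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class RotateForm {
    public static void main(String[] args) throws Exception {
        double angle = 1.6365770416167182; // degrees, e.g. the value printed by FormCorners
        BufferedImage src = ImageIO.read(new File("./res/printedForm.jpg"));

        // Rotate by -angle around the image centre to undo the detected tilt.
        AffineTransform tx = AffineTransform.getRotateInstance(
                Math.toRadians(-angle), src.getWidth() / 2.0, src.getHeight() / 2.0);
        AffineTransformOp op = new AffineTransformOp(tx, AffineTransformOp.TYPE_BILINEAR);
        BufferedImage aligned = op.filter(src, null);

        ImageIO.write(aligned, "jpg", new File("./res/printedForm_aligned.jpg"));
    }
}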
Edge detection is typically done by enhancing the contrast between neighboring pixels, so that you get an easily detectable line suitable for further processing.
To do this, a "kernel" transforms a pixel according to the pixel's initial value and the values of that pixel's neighbors. A good edge detection kernel will enhance the differences between neighboring pixels and reduce the strength of a pixel with similar neighbors.
I would start by looking at the Sobel operator. This might not return results that are immediately useful to you; however, it will get you far closer than you would be if you approached the problem with little knowledge of the field.
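As a hedged illustration of the idea (not code from this thread), here is a minimal Sobel gradient-magnitude pass over a grayscale BufferedImage; thresholding its output gives the kind of crisp edges discussed next:

import java.awt.image.BufferedImage;

public class SobelSketch {
    // Standard 3x3 Sobel kernels for horizontal and vertical gradients.
    private static final int[][] GX = {{-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1}};
    private static final int[][] GY = {{-1, -2, -1}, {0, 0, 0}, {1, 2, 1}};

    public static BufferedImage edges(BufferedImage gray) {
        int w = gray.getWidth(), h = gray.getHeight();
        BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_BYTE_GRAY);
        for (int y = 1; y < h - 1; y++) {
            for (int x = 1; x < w - 1; x++) {
                int gx = 0, gy = 0;
                // convolve the 3x3 neighborhood with both kernels
                for (int ky = -1; ky <= 1; ky++) {
                    for (int kx = -1; kx <= 1; kx++) {
                        int lum = gray.getRGB(x + kx, y + ky) & 0xFF; // gray value of the neighbor
                        gx += GX[ky + 1][kx + 1] * lum;
                        gy += GY[ky + 1][kx + 1] * lum;
                    }
                }
                int mag = Math.min(255, (int) Math.hypot(gx, gy)); // gradient magnitude, clamped to 255
                out.setRGB(x, y, (mag << 16) | (mag << 8) | mag);
            }
        }
        return out;
    }
}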
After you have some crisp, clean edges, you can use larger kernels to detect points where two lines meet in a roughly 90° bend; that might give you the pixel coordinates of the outer rectangle, which might be enough for your purposes.
With those outer coordinates, it is still a bit of math to have the new pixels composited from the average values of the old pixels, rotated and moved to "match". The results (especially if you do not know about anti-aliasing math) can be pretty bad, adding blur to the image.
Sharpening filters might be a solution, but they come with their own issues: mainly, they make the picture sharper by adding graininess. Too much, and it becomes obvious that the original image is not a high-quality scan.
I researched the libraries, but in the end I found it more convenient to code up my own edge detection methods.
The class below detects the black/gray edges of a scanned sheet of paper and returns the x and y coordinates of those edges, scanning from the rightmost or bottom edge (reverse = true) or from the left or top edge (reverse = false). The program also takes a range along the vertical edges (rangex) and along the horizontal edges (rangey), both measured in pixels; the ranges are used to identify outliers among the detected points.
The program does four vertical cuts and four horizontal cuts at the specified coordinates and records where each cut first hits a dark pixel. It uses the ranges to eliminate outliers: sometimes a little spot on the paper may produce an outlier point. The smaller the range, the fewer the outliers; however, the edge may be slightly tilted, so you don't want to make the range too small either.
Have fun. It works perfectly for me.
import java.awt.image.BufferedImage;
import java.awt.Color;
import java.util.ArrayList;

public class EdgeDetection {

    // public App ap; // reference to the author's own application class; not needed for the edge detection itself

    public int[] horizontalCuts = {120, 220, 320, 420};
    public int[] verticalCuts = {300, 350, 375, 400};

    public void printEdgesTest(BufferedImage image, boolean reversex, boolean reversey, int rangex, int rangey) {
        int[] mx = horizontalCuts;
        int[] my = verticalCuts;

        // You are getting edge points here.
        // The "true" parameter means these cuts run along the x axis at the y values in horizontalCuts,
        // detecting a vertical edge; whether they start at x = 0 or at the right edge depends on reversex.
        int[] xEdges = getEdges(image, mx, reversex, true);
        int edgex = getEdge(xEdges, rangex);
        for (int x = 0; x < xEdges.length; x++) {
            System.out.println("EDGE = " + xEdges[x]);
        }
        System.out.println("THE EDGE = " + edgex);

        // The "false" parameter means these cuts run along the y axis at the x values in verticalCuts,
        // detecting a horizontal edge; whether they start at y = 0 or at the bottom depends on reversey.
        int[] yEdges = getEdges(image, my, reversey, false);
        int edgey = getEdge(yEdges, rangey);
        for (int y = 0; y < yEdges.length; y++) {
            System.out.println("EDGE = " + yEdges[y]);
        }
        System.out.println("THE EDGE = " + edgey);
    }

    // This function takes an array of coordinates, detects outliers,
    // and computes the average of the non-outlier points.
    public int getEdge(int[] edges, int range) {
        ArrayList<Integer> result = new ArrayList<Integer>();
        boolean[] passes = new boolean[edges.length];
        int[][] differences = new int[edges.length][edges.length - 1];

        // This code segment saves the differences between the points into an array.
        for (int n = 0; n < edges.length; n++) {
            for (int m = 0; m < edges.length; m++) {
                if (m < n) {
                    differences[n][m] = edges[n] - edges[m];
                } else if (m > n) {
                    differences[n][m - 1] = edges[n] - edges[m];
                }
            }
        }

        // This array marks which points are valid (fall within range of at least one other point)
        // and which are outliers.
        for (int n = 0; n < edges.length; n++) {
            passes[n] = false;
            for (int m = 0; m < edges.length - 1; m++) {
                if (Math.abs(differences[n][m]) < range) {
                    passes[n] = true;
                    System.out.println("EDGECHECK = TRUE" + n);
                    break;
                }
            }
        }

        // Create a new list using only the valid points.
        for (int i = 0; i < edges.length; i++) {
            if (passes[i]) {
                result.add(edges[i]);
            }
        }

        // Calculate the rounded mean; this will be the x/y coordinate of the edge.
        // Whether they are x or y values depends on the "reverse" variable used to calculate the edges array.
        int divisor = result.size();
        int addend = 0;
        double mean = 0;
        for (Integer i : result) {
            addend += i;
        }
        mean = (double) addend / (double) divisor;

        // Returns the mean of the valid points: this is the x or y coordinate of your calculated edge.
        if (mean - (int) mean >= .5) {
            System.out.println("MEAN " + mean);
            return (int) mean + 1;
        } else {
            System.out.println("MEAN " + mean);
            return (int) mean;
        }
    }

    // This function finds "dark" points, which include light gray, to detect edges.
    // reverse      - when true, starts counting down from x = image.getWidth() - 1 (or y = image.getHeight() - 1)
    //                toward 0; when false, counts up from 0
    // verticalEdge - determines whether you want to detect a vertical edge or a horizontal edge
    // arr[]        - the coordinates of the vertical or horizontal cuts you will do;
    //                set this array according to the graphical layout of your scanned image
    // image        - the image you want to detect black/white edges of
    public int[] getEdges(BufferedImage image, int[] arr, boolean reverse, boolean verticalEdge) {
        int red = 255;
        int green = 255;
        int blue = 255;
        int[] result = new int[arr.length];
        for (int n = 0; n < arr.length; n++) {
            for (int m = reverse ? (verticalEdge ? image.getWidth() : image.getHeight()) - 1 : 0;
                 reverse ? m >= 0 : m < (verticalEdge ? image.getWidth() : image.getHeight()); ) {
                Color c = new Color(image.getRGB(verticalEdge ? m : arr[n], verticalEdge ? arr[n] : m));
                red = c.getRed();
                green = c.getGreen();
                blue = c.getBlue();
                // Determine whether the point is considered "dark" or not.
                // Modify the range if you want to include only really dark spots;
                // occasionally, though, the edge is blurred out and light gray helps.
                if (red < 239 && green < 239 && blue < 239) {
                    result[n] = m;
                    break;
                }
                // Count forwards or backwards depending on the reverse variable.
                if (reverse) {
                    m--;
                } else {
                    m++;
                }
            }
        }
        return result;
    }
}
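A hedged usage sketch: the input file name is hypothetical, and the cut coordinates in the class above must first be adjusted to your own scan layout. Passing reverse = false for both axes asks for the left and top edges, and 10 px is an arbitrary outlier range:

import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class EdgeDetectionDemo {
    public static void main(String[] args) throws Exception {
        BufferedImage scan = ImageIO.read(new File("scannedForm.jpg")); // hypothetical input file
        EdgeDetection detector = new EdgeDetection();
        // Scan from the left and top edges, allowing 10 px of spread between cut results.
        detector.printEdgesTest(scan, false, false, 10, 10);
    }
}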
A similar problem I solved in the past basically figured out the orientation of the form, re-aligned it, re-scaled it, and I was all set. You can use the Hough transform to detect the angular offset of the image (i.e. how much it is rotated), but you still need to detect the boundaries of the form. It also had to accommodate the boundaries of the piece of paper itself.
This was a lucky break for me, because it basically showed a black-and-white image in the middle of a big black border.
1) Apply an aggressive 5x5 median filter to remove some noise.
2) Convert from grayscale to black and white (rescale intensity values from [0, 255] to [0, 1]).
3) Calculate the Principal Component Analysis (i.e. calculate the eigenvectors of the covariance matrix for your image from the calculated eigenvalues) (http://en.wikipedia.org/wiki/Principal_component_analysis#Derivation_of_PCA_using_the_covariance_method)
4) This gives you a basis vector. You simply use it to re-orient your image to a standard basis matrix (i.e. [1,0],[0,1]).
Your image is now aligned beautifully. I did this for normalizing the orientation of MRI scans of entire human brains.
You also know that you have a massive black border around the actual image. You simply keep deleting rows from the top and bottom, and columns from both sides of the image, until they are all gone. You can temporarily apply a 7x7 median or mode filter to a copy of the image at this point; it helps rule out too much border remaining in the final image from thumbprints, dirt, etc.
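As a hedged sketch of that border-trimming step (assuming a roughly binarized BufferedImage, and using arbitrary "mostly black" thresholds; this is not the author's code):

import java.awt.image.BufferedImage;

public class BorderTrim {
    // Crop away rows and columns that are almost entirely black border.
    public static BufferedImage trim(BufferedImage img) {
        int top = 0, bottom = img.getHeight() - 1, left = 0, right = img.getWidth() - 1;
        while (top < bottom && rowIsBlack(img, top)) top++;
        while (bottom > top && rowIsBlack(img, bottom)) bottom--;
        while (left < right && colIsBlack(img, left, top, bottom)) left++;
        while (right > left && colIsBlack(img, right, top, bottom)) right--;
        return img.getSubimage(left, top, right - left + 1, bottom - top + 1);
    }

    private static boolean rowIsBlack(BufferedImage img, int y) {
        int dark = 0;
        for (int x = 0; x < img.getWidth(); x++) {
            if ((img.getRGB(x, y) & 0xFF) < 40) dark++; // "dark" if below an arbitrary threshold
        }
        return dark > img.getWidth() * 0.95; // a row counts as border if 95% of it is dark
    }

    private static boolean colIsBlack(BufferedImage img, int x, int top, int bottom) {
        int dark = 0;
        for (int y = top; y <= bottom; y++) {
            if ((img.getRGB(x, y) & 0xFF) < 40) dark++;
        }
        return dark > (bottom - top + 1) * 0.95;
    }
}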

Java, why do my graphics draw out of the frame/range they should be in?

So, I am making a program that currently draws filled circles of a random size (between 6 and 9 inclusive) randomly on a JFrame of size 1024x768. The problem I am having is that even after I coded in a rule that should ensure all the circles fall within the 1024x768 JFrame, the circles still fall outside the desired boundaries. Below is the code segment that should generate the correct location for each circle:
private static KillZoneLocation generateLocation() {
    int genX, genY;
    int xmax = 1024 - generatedGraphic.getRadius();
    int ymax = 768 - generatedGraphic.getRadius();
    KillZoneLocation location = new KillZoneLocation();
    do {
        genX = generatedGraphic.getRadius() + (int) (Math.random() * xmax);
        genY = generatedGraphic.getRadius() + (int) (Math.random() * ymax);
        location.setXcoord(genX);
        location.setYcoord(genY);
        generatedLocation = location;
    } while (isOverlaping(location));
    return location;
}
generatedGraphic is a global variable in the class containing the method above; its getRadius() method returns a number between 6 and 9 inclusive.
generatedGraphic.getRadius() returns a random number produced by this algorithm: int radius = 7 + (int)(Math.random()*9);. The number has been generated earlier by a different method; getRadius() is just a getter, so the radius is not regenerated every time it is called.
isOverlaping(location) just checks that the circle does not overlap another circle already placed on the JFrame.
location.set... are just setter methods.
I'm thinking this is just a silly logic error, but I still can't figure out why the circles are being drawn outside of the frame.
I am purposely avoiding posting any more code because it would confuse you: the program has a much larger scope than I have described and there are a dozen files all intertwined. I debugged this code and realized that the numbers returned by:
genX = generatedGraphic.getRadius() + (int)(Math.random()*xmax);
genY = generatedGraphic.getRadius() + (int)(Math.random()*ymax);
are out of range.
Draw class:
import java.awt.*;
import java.awt.event.*;
import java.awt.geom.*;
import javax.swing.*;

public class KillZoneGUI extends JFrame {

    public KillZoneGUI() {
        setSize(1024, 768);
        setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        setLocationRelativeTo(null);
        setVisible(true);
    }

    public static void main(String s[]) {
        GenerateKillZone.setup(1024, 768);
        new KillZoneGUI();
    }

    public void paint(Graphics g) {
        for (Robot r : KillZone.getRobots()) {
            g.setColor(r.getGraphic().getColor());
            g.fillOval(
                r.getLocation().getXcoord(),
                r.getLocation().getYcoord(),
                r.getGraphic().getRadius(),
                r.getGraphic().getRadius());
        }
    }
}
The correct code should read:
genX = (int)(Math.random()*xmax);
genY = (int)(Math.random()*ymax);
Remember that Graphics2D.fillOval() uses genX/genY for the top-left corner, and the oval then extends by the value of getRadius(). You're subtracting the size of your radius, but then adding it back in twice: once in your genX/genY assignments, and once when you draw the oval.

Alternatively, keep the offset but shrink the range:
int xmax = 1024 - 2 * generatedGraphic.getRadius();
int ymax = 768 - 2 * generatedGraphic.getRadius();
since you are starting from generatedGraphic.getRadius() and may only go up to xmax/ymax. Then, as @JasonNichols's answer explains, fillOval starts at the (left, top) corner, so the coordinate may range from 0 to the width minus the oval's diameter.
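A hedged sketch of generateLocation with the first correction applied (the class and helper names are taken from the question and assumed to exist as described there):

private static KillZoneLocation generateLocation() {
    // getRadius() is used as the drawn width of the oval, so the top-left corner
    // may range from 0 to (frame size - that width) on each axis.
    int xmax = 1024 - generatedGraphic.getRadius();
    int ymax = 768 - generatedGraphic.getRadius();
    KillZoneLocation location = new KillZoneLocation();
    do {
        location.setXcoord((int) (Math.random() * xmax));
        location.setYcoord((int) (Math.random() * ymax));
        generatedLocation = location;
    } while (isOverlaping(location));
    return location;
}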

Java Robot class simulating human mouse movement

I am working on a project about remote control, sending the x and y coordinates of the cursor from a client to a server.
But
robot.mouseMove(x,y);
will only jump the cursor to that particular point, without moving it gradually from its original position.
I have found this simple algorithm to simulate a continuous mouse movement:
for (int i = 0; i < 100; i++) {
    int x = ((end_x * i) / 100) + (start_x * (100 - i) / 100);
    int y = ((end_y * i) / 100) + (start_y * (100 - i) / 100);
    robot.mouseMove(x, y);
}
But this algorithm is still too simple: it just moves slowly from one point to the other, which still doesn't look like human behaviour.
I have read some open source code about remote control on the web, and I found this project:
http://code.google.com/p/java-remote-control/
It uses a method called MouseMovement from the MouseListener class, which they use to perform the "dragging".
Does anyone know a better way of doing this?
There are a few things to consider if you want to make the artificial movement natural, I think:
Human mouse movement is usually in a slight arc because the mouse hand pivots around the wrist. Also that arc is more pronounced for horizontal movements than vertical.
Humans tend to go in the general direction, often overshoot the target and then go back to the actual target.
Initial speed towards the target is quite fast (hence the aforementioned overshoot) and then a bit slower for precise targeting. However, if the cursor is close to the target initially the quick move towards it doesn't happen (and neither does the overshoot).
This is a bit complex to formulate in algorithms, though.
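As a hedged sketch only (not the approach of any of the linked projects), here is one way to encode the first two observations: an eased movement along a slightly curved path, with a small overshoot that is corrected at the end. The curve offset, overshoot distances, and timing constants are arbitrary illustration values:

import java.awt.MouseInfo;
import java.awt.Point;
import java.awt.Robot;

public class HumanLikeMove {
    public static void moveTo(Robot robot, int endX, int endY) throws InterruptedException {
        Point start = MouseInfo.getPointerInfo().getLocation();
        // Overshoot the target slightly, then settle back onto it.
        int overX = endX + (endX > start.x ? 12 : -12);
        int overY = endY + (endY > start.y ? 6 : -6);
        curve(robot, start.x, start.y, overX, overY, 60);
        curve(robot, overX, overY, endX, endY, 20);
    }

    // Quadratic Bezier from (x0,y0) to (x1,y1) whose control point is pushed sideways,
    // producing the slight arc of a wrist pivot; smoothstep easing makes the cursor
    // slow at the ends of each segment and faster in the middle.
    private static void curve(Robot robot, int x0, int y0, int x1, int y1, int steps)
            throws InterruptedException {
        double cx = (x0 + x1) / 2.0;
        double cy = (y0 + y1) / 2.0 + (x1 - x0) * 0.12; // arc is larger for horizontal moves
        for (int i = 1; i <= steps; i++) {
            double t = (double) i / steps;
            t = t * t * (3 - 2 * t); // smoothstep easing
            double x = (1 - t) * (1 - t) * x0 + 2 * (1 - t) * t * cx + t * t * x1;
            double y = (1 - t) * (1 - t) * y0 + 2 * (1 - t) * t * cy + t * t * y1;
            robot.mouseMove((int) Math.round(x), (int) Math.round(y));
            Thread.sleep(5);
        }
    }

    public static void main(String[] args) throws Exception {
        moveTo(new Robot(), 400, 300); // example target
    }
}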
For anyone in the future: I developed a library for Java that mimics human mouse movement: the noise/jaggedness in the movement, sinusoidal arcs, slightly overshooting the target position, etc. The library is written with extension and configuration in mind, so anyone can fine-tune it if the default solution does not fit their case. It is available from Maven Central now.
https://github.com/JoonasVali/NaturalMouseMotion
Take a look at this example that I wrote. You can improve it to simulate what Joey said. I wrote it very quickly and there are lots of things that can be improved (the algorithm and the class design). Note that I only deal with left-to-right movements.
import java.awt.AWTException;
import java.awt.MouseInfo;
import java.awt.Point;
import java.awt.Robot;

public class MouseMoving {

    public static void main(String[] args) {
        new MouseMoving().execute();
    }

    public void execute() {
        new Thread( new MouseMoveThread( 100, 50, 50, 10 ) ).start();
    }

    private class MouseMoveThread implements Runnable {

        private Robot robot;
        private int startX;
        private int startY;
        private int currentX;
        private int currentY;
        private int xAmount;
        private int yAmount;
        private int xAmountPerIteration;
        private int yAmountPerIteration;
        private int numberOfIterations;
        private long timeToSleep;

        public MouseMoveThread( int xAmount, int yAmount,
                int numberOfIterations, long timeToSleep ) {
            this.xAmount = xAmount;
            this.yAmount = yAmount;
            this.numberOfIterations = numberOfIterations;
            this.timeToSleep = timeToSleep;
            try {
                robot = new Robot();
                Point startLocation = MouseInfo.getPointerInfo().getLocation();
                startX = startLocation.x;
                startY = startLocation.y;
            } catch ( AWTException exc ) {
                exc.printStackTrace();
            }
        }

        @Override
        public void run() {
            currentX = startX;
            currentY = startY;
            xAmountPerIteration = xAmount / numberOfIterations;
            yAmountPerIteration = yAmount / numberOfIterations;
            while ( currentX < startX + xAmount &&
                    currentY < startY + yAmount ) {
                currentX += xAmountPerIteration;
                currentY += yAmountPerIteration;
                robot.mouseMove( currentX, currentY );
                try {
                    Thread.sleep( timeToSleep );
                } catch ( InterruptedException exc ) {
                    exc.printStackTrace();
                }
            }
        }
    }
}
