We were asked to do the n-queens problem in class, and I came across this bit of code online. The deadline for our submission has already passed, and I turned in a solution that uses arrays, but this code interested me, as it required significantly fewer lines than my solution. I'm not quite sure what is happening in the else statement, so if someone could explain, I would be greatly appreciative! Thanks in advance!
import java.util.Scanner;

public class NQueens {
    private static int size;  // n
    private static int mask;  // n low-order 1 bits
    private static int count; // solutions

    // Uses recursion to calculate the number of possible solutions, and increments "count".
    public static void backtrack(int y, int left, int down, int right) {
        int bitmap;
        int bit;
        if (y == size) {
            count++;
        }
        else {
            bitmap = mask & ~(left | down | right);
            while (bitmap != 0) {
                bit = -bitmap & bitmap;
                bitmap ^= bit;
                backtrack(y + 1, (left | bit) << 1, down | bit, (right | bit) >> 1);
            }
        }
    }

    // main
    public static void main(String[] args) {
        Scanner keyboard = new Scanner(System.in);
        System.out.print("Enter the number of queens: ");
        size = keyboard.nextInt();
        count = 0;
        mask = (1 << size) - 1;
        backtrack(0, 0, 0, 0);
        System.out.println("The valid number of arrangements is " + count);
    }
}
I'll give it here in loose terms with signposts to the details.
What is the overall approach?
As the method name hints, backtrack implements a "backtracking" search for solutions. https://en.wikipedia.org/wiki/Backtracking That means that it drives down every possible path, making a decision about each branch whether the quest is still possible, abandoning any path the instant it is proven not to be viable, and backtracking to the most recent decision point to try another path. Quoting that Wikipedia article, regarding the N-queens problem, "In the common backtracking approach, the partial candidates are arrangements of k queens in the first k rows of the board, all in different rows and columns [and diagonals -Ed.]. Any partial solution that contains two mutually attacking queens can be abandoned."
By "partial candidate" we mean a sequence of placements of each queen starting with k == 0 (solution still possible no matter what the next choice is), then 1 (fewer solutions possible because some choices put queens in attacking positions), then 2, and so on until N. With each placement you put the queen in a new row, because obviously any previous row is not a viable choice.
What is a "placement" in the algorithm?
To model placement of a queen in an NxN chess board, you need a data structure to represent that board, and whether a square is occupied, and whether two occupied squares are in a mutual attack relationship.
The data structure in the example is a bitmap. Here's where it gets tricky. You need to be familiar with bit manipulation to follow it.
private static int size; //n
private static int mask;
private static int count; //solutions
size is the number of queens, equal to the number of rows occupied.
count is the number of solutions found.
mask is a sequence of size consecutive 1 bits, used to mask off int values to the size of the problem. In the eight-queens example, it will equal 0xff, or 0b1111_1111.
backtrack(int y, int left, int down, int right)
y is easy: it's the current number of queens placed so far or, equivalently, the number of rows that have queens so far. The other three values use bit-operation trickery to reveal, in three directions, whether there are attack vectors. This is where it gets murky. I haven't gone all the way through it, but I'll indicate how to proceed to full understanding.
bitmap = mask & ~(left | down | right);
ORs the three attack masks together, bit-flips the result, and masks it down to size bits. The 1 bits that survive mark the columns in the current row that are not under attack, i.e. the placements still worth trying.
bit = -bitmap & bitmap;
Takes the two's complement of the current value of bitmap (which will not be 0 here) and ANDs it with the original value. By the way two's complement works, this isolates the lowest 1 bit of bitmap: the next candidate column to try.
bitmap ^= bit;
XORs bit into bitmap, which flips any bit in bitmap that has a 1 in the corresponding position in bit. Since bit is a subset of bitmap, this clears the bit just extracted, so the next loop iteration moves on to the remaining candidates.
backtrack(y + 1, (left | bit) << 1, down | bit, (right | bit) >> 1);
Applies the recursion to the next queen (row), setting the new left to the old one merged with bit and shifted left one place, because a diagonal threat in that direction moves over one file ("file" in the chess sense) per row. It shifts the right | bit merge one place to the right for the same reason on the other diagonal, and it leaves the down | bit merge in place, since a column threat stays in the same file.
Loosely the result of this is to zero out the positions that have mutual attack vectors. Every different combination of file placement is tried except ones that reach full 0 before all the queens have been placed.
Exactly how those bits indicate attack vectors is left as an exercise. How they migrate around the size-bit field is a matter of pencil-and-paper tracking the loop line by line.
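If it helps, the two bit tricks in the loop body can be watched in isolation with a small stand-alone snippet (the starting value here is arbitrary, chosen just to show the mechanics):

```java
public class BitTricks {
    public static void main(String[] args) {
        int bitmap = 0b10110; // pretend these 1s are the legal columns in a row
        while (bitmap != 0) {
            int bit = -bitmap & bitmap; // two's complement trick: lowest 1 bit
            bitmap ^= bit;              // clear that bit from the bitmap
            System.out.println(Integer.toBinaryString(bit));
        }
        // prints 10, 100, 10000: each set bit extracted, lowest first
    }
}
```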
EDIT: I didn't mention it, but this algorithm handles the diagonals, as is implicit in the rules.
EDIT: Results from a sample run of a version of the program:
size, solutions, backtracks, millisec
0, 1, 1, 0
1, 1, 2, 0
2, 0, 3, 0
3, 0, 6, 0
4, 2, 17, 0
5, 10, 54, 0
6, 4, 153, 0
7, 40, 552, 0
8, 92, 2057, 0
9, 352, 8394, 0
10, 724, 35539, 1
11, 2680, 166926, 0
12, 14200, 856189, 16
13, 73712, 4674890, 116
14, 365596, 27358553, 702
15, 2279184, 171129072, 4318
16, 14772512, 1141190303, 30321
17, 95815104, 8017021932, 208300
I need a fast way to find the maximum value when intervals are overlapping. Unlike finding the point where the most intervals overlap, here there is an "order". I have int[][] data with two values per int[], where the first number is the center and the second number is the radius; the closer a point is to the center, the larger the value at that point. For example, if I am given data like:
int[][] data = new int[][]{
{1, 1},
{3, 3},
{2, 4}};
Then on a number line, this is what it looks like:
x axis: -2 -1 0 1 2 3 4 5 6 7
1 1: 1 2 1
3 3: 1 2 3 4 3 2 1
2 4: 1 2 3 4 5 4 3 2 1
So for the value of my point to be as large as possible, I need to pick the point x = 2, which gives a total value of 1 + 3 + 5 = 9, the largest possible value. Is there a way to do it fast, with a time complexity like O(n) or O(n log n)?
This can be done with a simple O(n log n) algorithm.
Consider the value function v(x), and then consider its discrete derivative dv(x) = v(x) - v(x-1). Suppose you only have one interval, say {3,3}. dv(x) is 0 from -infinity to -1, then 1 from 0 to 3, then -1 from 4 to 7, then 0 from 8 to infinity. That is, the derivative changes by 1 "just after" -1, by -2 just after 3, and by 1 just after 7.
For n intervals, there are 3*n derivative changes (some of which may occur at the same point). So build the list of all derivative changes (x, change), sort them by x, and then just iterate through the list.
Behold:
intervals = [(1, 1), (3, 3), (2, 4)]

events = []
for mid, width in intervals:
    before_start = mid - width - 1
    after_end = mid + width + 1
    events += [(before_start, 1), (mid, -2), (after_end, 1)]
events.sort()

prev_x = -1000
v = 0
dv = 0
best_v = -1000
best_x = None
for x, change in events:
    dx = x - prev_x
    v += dv * dx
    if v > best_v:
        best_v = v
        best_x = x
    dv += change
    prev_x = x

print(best_x, best_v)  # 2 9
And also the java code:
TreeMap<Integer, Integer> ts = new TreeMap<Integer, Integer>();
for (int i = 0; i < cows.size(); i++) {
    int mid = cows.get(i)[0];
    int width = cows.get(i)[1];
    // the three derivative changes per interval, merged into the map
    ts.put(mid - width - 1, ts.getOrDefault(mid - width - 1, 0) + 1);
    ts.put(mid, ts.getOrDefault(mid, 0) - 2);
    ts.put(mid + width + 1, ts.getOrDefault(mid + width + 1, 0) + 1);
}
int value = 0;
int best = 0;
int change = 0;
int indexBefore = -100000000;
while (!ts.isEmpty()) {
    int index = ts.firstKey();
    value += (index - indexBefore) * change;
    best = Math.max(value, best);
    change += ts.get(index);
    indexBefore = index;
    ts.remove(index);
}
where cows is the data
Hmmm, a general O(n log n) or better would be tricky, probably solvable via linear programming, but that can get rather complex.
After a bit of wrangling, I think this can be solved via line intersections and summation of functions represented by line segments. Basically, think of each input as a triangle sitting on a baseline. If the input is (C, R), the triangle is centered on C and has a radius of R: the points on the baseline are C-R (value 0) and C+R (value 0), with the peak at C. Each line segment of the triangle represents a value.
Consider any 2 such "triangles"; the max value occurs in one of 2 places:
The peak of one of the triangles
The intersection point where the two triangles' slopes cross. Multiple triangles just mean more possible intersection points; sadly the number of possible intersections grows quadratically, so O(N log N) or better may be impossible with this method (unless some good optimizations are found, or the number of intersections happens to be O(N) or less).
To find all the intersection points, we can use a standard intersection-finding algorithm, modified in one specific way: from each peak, add a vertical segment extending high enough to be above any line, say up to (C, Max_R), so that each peak also shows up as an intersection. Output-sensitive intersection-finding algorithms are O(N log N + k), where k is the number of intersections. Sadly k can be as high as O(N^2): consider (1,100), (2,100), (3,100), and so on up to (50,100); every line would intersect with every other line. Once you have the O(N + k) intersections, you calculate the value at each one by summing the lines active at that point. The running sum can be kept as a cached value so it only changes O(k) times; if that is not possible, it becomes O(N*k), making it potentially O(N^3) in the worst case for k. Though that seems reasonable: for each intersection you sum at most O(N) lines to get the value for that point, and in practice it would likely perform much better.
There are optimizations that could be done considering that you aim for the max and not just to find intersections. There are likely intersections not worth pursuing; however, I could also see situations so close that you can't cut them down. It reminds me of convex hull: in many cases you can easily discard 90% of the data, but there are cases where you hit the worst case (every point, or almost every point, is a hull point). For example, in practice there are certainly cases where you can be sure that the sum is going to be less than the current known max value.
Another optimization might be building an interval tree.
How do you find an abrupt change in an array? For example, if you have the following array:
1,3,8,14,58,62,69
In this case, there is a jump from 14 to 58
OR
79,77,68,61,9,3,1
In this case, there is a drop from 61 to 9
In both examples, there are small and big jumps. For example, in the 2nd case, there is a small drop from 77 to 68. However, this must be ignored if a larger jump/drop is found. I have the following algorithm in mind, but I am not sure it covers all possible cases:
ALGO
Iterate over the array
Compute diff = arr[i+1] - arr[i]
Store the first diff in a variable
If the next diff is bigger than the previous one, overwrite the variable
This algo will not work for the following case:
1, 2, 4, 6, 34, 38, 41, 67, 69, 71
There are two jumps in this array. So it should be arranged like
[1, 2, 4, 6], [34, 38, 41], [67, 69, 71]
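The grouping described above can be produced by splitting wherever the gap between neighbours exceeds some chosen threshold (the class name and the threshold value of 10 here are just illustrative; picking the threshold is exactly the open question):

```java
import java.util.ArrayList;
import java.util.List;

public class JumpSplit {
    // Split arr into groups wherever the gap between neighbours exceeds threshold.
    static List<List<Integer>> split(int[] arr, int threshold) {
        List<List<Integer>> groups = new ArrayList<>();
        List<Integer> current = new ArrayList<>();
        current.add(arr[0]);
        for (int i = 1; i < arr.length; i++) {
            if (Math.abs(arr[i] - arr[i - 1]) > threshold) {
                groups.add(current); // close the current group at a jump
                current = new ArrayList<>();
            }
            current.add(arr[i]);
        }
        groups.add(current);
        return groups;
    }

    public static void main(String[] args) {
        int[] arr = {1, 2, 4, 6, 34, 38, 41, 67, 69, 71};
        System.out.println(split(arr, 10));
        // prints [[1, 2, 4, 6], [34, 38, 41], [67, 69, 71]]
    }
}
```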
In the end, this is pure statistics. You have a data set, and you are looking for certain forms of outliers. In that sense, your requirement to detect "abrupt changes" is not very precise.
I think you should step back and have a deeper look into the mathematics behind your problem, to come up with clear "semantics" and crisp definitions for what you actually want (for example based on average, deviation, etc.). The wikipedia link I gave above should be a good starting point for that part.
From there on, to get to an Java implementation, you might start looking here.
I would look into using a moving average; this involves looking at an average over the last X values. Do this based on the change in value (Y1 - Y2); any large deviation from the average could be seen as a big shift.
However, given how small your data sets are, a moving average would likely yield bad results. With such a small sample size it might be better to take an average of all the deltas in the array instead:
double[] nums = new double[] {79, 77, 68, 61, 9, 3, 1};
double[] deltas = new double[nums.length - 1];
double avgDelta = 0;
for (int i = 0; i < nums.length - 1; i++) {
    deltas[i] = nums[i + 1] - nums[i];
    avgDelta += deltas[i] / deltas.length;
}
// search for deltas > average
for (int i = 0; i < deltas.length; i++) {
    if (Math.abs(deltas[i]) > Math.abs(avgDelta)) {
        System.out.println("Big jump between " + nums[i] + " " + nums[i + 1]);
    }
}
This problem doesn't have an absolute solution, you'll have to determine thresholds for the context in which the solution is to be applied.
No algorithm can give us the rule for the jump. We as humans are able to spot these changes because we can see the entire data set at a glance. If the data set were large enough, even we would find it difficult to say which jumps should count. For example, if the differences between consecutive numbers are 10 on average, then any difference above that would be considered a jump. But in a large data set there can be differences that are isolated spikes, and differences that establish a new normal, say the differences suddenly go from around 10 to around 100. We have to decide whether to judge jumps against the average difference of 10 or of 100.
If we are interested in local spikes only, then it's possible to use a moving average as suggested by @ug_.
However, a moving average has to be moving, meaning we maintain a window of local numbers with a fixed size. On that window we calculate the average of the differences and then compare it to the local differences.
However, here we again face the problem of choosing the size of the local window. This threshold determines the granularity of the jumps that we capture: a very large window will tend to ignore jumps that are close together, and a smaller one will tend to produce false positives.
Following is a simple solution where you can try tuning the thresholds. The local window size in this case is 3, which is the minimum usable, since it gives us the minimum number of differences required, namely 2.
public class TestJump {
    public static void main(String[] args) {
        int[] arr = {1, 2, 4, 6, 34, 38, 41, 67, 69, 71};
        //int[] arr = {1,4,8,13,19,39,60,84,109};
        double thresholdDeviation = 50; // percent jump to detect, set for your requirement
        double thresholdDiff = 3; // minimum difference between consecutive differences, to avoid false positives like 1,2,4
        System.out.println("Started");
        for (int i = 1; i < arr.length - 1; i++) {
            double diffPrev = Math.abs(arr[i] - arr[i-1]);
            double diffNext = Math.abs(arr[i+1] - arr[i]);
            double deviation = Math.abs(diffNext - diffPrev) / diffPrev * 100;
            if (deviation > thresholdDeviation && Math.abs(diffNext - diffPrev) > thresholdDiff) {
                System.out.printf("Abrupt change # %d: (%d, %d, %d)%n", i, arr[i-1], arr[i], arr[i+1]);
                i++;
            }
            //System.out.println(deviation + " : " + Math.abs(diffNext - diffPrev));
        }
        System.out.println("Finished");
    }
}
Output
Started
Abrupt change # 3: (4, 6, 34)
Abrupt change # 6: (38, 41, 67)
Finished
If you're trying to solve a larger problem than just arrays, like finding spikes in medical data or images, then you should check out neural networks.
I've been struggling with a question I'm trying to solve as part of test preparation, and I thought I could use your help.
I need to write a Boolean method that takes an array of integers (positive and negative) and returns true if the array can be split into two groups of equal size, such that the sum of each group's numbers equals the sum of the other group's.
For example, for this array:
int[]arr = {-3, 5, 12, 14, -9, 13};
The method will return true, since -3 + 5 + 14 = 12 + -9 + 13.
For this array:
int[]arr = {-3, 5, -12, 14, -9, 13};
The method will return false, since even though -3 + 5 + 14 + -12 = -9 + 13, the number of elements on each side of the equation isn't equal.
For the array:
int[]arr = {-3, 5, -12, 14, -9};
The method will return false since the array length isn't even.
The method must be recursive, overloading is allowed, every helper method must be recursive too, and I don't need to worry about complexity.
I've been trying to solve this for three hours, and I don't even have code to show, since everything I tried was far from a solution.
If someone can at least give me some pseudo code it would be great.
Thank you very much!
You asked for pseudocode, but sometimes it's just as easy and clear to write it as Java.
The general idea of this solution is to try adding each number to either the left or the right of the equation. It keeps track of the count and sum on each side at each step in the recursion. More explanation in comments:
class Balance {
    public static void main(String[] args) {
        System.out.println(balanced(-3, 5, 12, 14, -9, 13));  // true
        System.out.println(balanced(-3, 5, -12, 14, -9, 13)); // false
    }

    private static boolean balanced(int... nums) {
        // First check if there are an even number of nums.
        return nums.length % 2 == 0
                // Now start the recursion:
                && balanced(
                        0, 0, // Zero numbers on the left, summing to zero.
                        0, 0, // Zero numbers on the right, summing to zero.
                        nums);
    }

    private static boolean balanced(
            int leftCount, int leftSum,
            int rightCount, int rightSum,
            int[] nums) {
        int idx = leftCount + rightCount;
        if (idx == nums.length) {
            // We have attributed all numbers to either side of the equation.
            // Now check if there are an equal number and equal sum on the two sides.
            return leftCount == rightCount && leftSum == rightSum;
        } else {
            // We still have numbers to allocate to one side or the other.
            return
                    // What if I were to place nums[idx] on the left of the equation?
                    balanced(
                            leftCount + 1, leftSum + nums[idx],
                            rightCount, rightSum,
                            nums)
                    // What if I were to place nums[idx] on the right of the equation?
                    || balanced(
                            leftCount, leftSum,
                            rightCount + 1, rightSum + nums[idx],
                            nums);
        }
    }
}
This is just a first idea solution. It's O(2^n), which is obviously rather slow for large n, but it's fine for the size of problems you have given as examples.
The problem described is a version of the Partition problem. First note that your formulation is equivalent to deciding whether there is a subset of the input which sums up to half of the sum of all elements (which is required to be an integral number, otherwise the instance cannot be solved, but this is easy to check). Basically, in each recursive step, it is to be decided whether the first number is to be selected into the subset or not, resulting in different recursive calls. If n denotes the number of elements, there must be n/2 (which is required to be integral again) items selected.
Let Sum denote the sum of the input and let Target := Sum / 2 which in the sequel is assumed to be integral. if we let
f(arr, a, count) := true   if there is a subset of arr summing up to a
                           with exactly count elements
                    false  otherwise

we obtain the following recursion:

f(arr, a, count) = (arr[0] == a && count == 1) || (a == 0 && count == 0)
                       if arr contains only one element
f(arr, a, count) = f(arr \ arr[0], a, count) || f(arr \ arr[0], a - arr[0], count - 1)
                       if arr contains more than one element

where || denotes logical disjunction, && denotes logical conjunction and \ denotes removal of an element.
The two cases for a non-singleton array correspond to choosing the first element of arr into the desired subset or into its relative complement. Note that in an actual implementation the element would not actually be removed from the array; a starting index, passed as an additional argument, would be initialized with 0 and increased in each recursive call, eventually reaching the end of the array.
Finally, f(arr,Target,n/2) yields the desired value.
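A direct Java transcription of this recursion might look as follows (the names f and balanced are mine; a starting index replaces the element removal, as described above):

```java
public class Partition {
    // f(arr, idx, a, count): is there a subset of arr[idx..] summing to a
    // with exactly count elements?
    static boolean f(int[] arr, int idx, int a, int count) {
        if (idx == arr.length - 1) // arr[idx..] contains only one element
            return (arr[idx] == a && count == 1) || (a == 0 && count == 0);
        return f(arr, idx + 1, a, count)                 // leave arr[idx] out
            || f(arr, idx + 1, a - arr[idx], count - 1); // choose arr[idx]
    }

    public static boolean balanced(int[] arr) {
        if (arr.length % 2 != 0) return false;
        int sum = 0;
        for (int x : arr) sum += x;
        if (sum % 2 != 0) return false; // Target must be integral
        return f(arr, 0, sum / 2, arr.length / 2); // f(arr, Target, n/2)
    }

    public static void main(String[] args) {
        System.out.println(balanced(new int[] {-3, 5, 12, 14, -9, 13}));  // true
        System.out.println(balanced(new int[] {-3, 5, -12, 14, -9, 13})); // false
    }
}
```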
Your strategy for this should be to try all combinations possible. I will try to document how I would go about to get to this.
NOTE that I think the requirement to make every function use recursion is a bit hard: complying would mean leaving out some helper functions that make the code much more readable, so in this case I won't do it like that.
With recursion you always want to make progression towards a final solution, and detect when you are done. So we need two parts in our function:
The recursive step: for which we will take the first element of the input set, and try what happens if we add it to the first set, and if that doesn't find a solution we'll try what happens when we add it to the second set.
Detect when we are done, that is when the input set is empty, in that case we either have found a solution or we have not.
A trick in our first step is that after taking the first element of our set, when we partition the remainder we no longer want the two sets to be equal: their sums must now differ by exactly the element we already assigned to one of the sets.
This leads to a solution that follows this strategy:
public boolean isValidSet(MySet<Integer> inputSet, int sumDifferenceSet1minus2)
{
    if (inputSet.isEmpty())
    {
        return sumDifferenceSet1minus2 == 0;
    }
    int first = inputSet.getFirst();
    return isValidSet(inputSet.copyMinusFirst(), sumDifferenceSet1minus2 + first)
        || isValidSet(inputSet.copyMinusFirst(), sumDifferenceSet1minus2 - first);
}
This code requires some help functions that you will still need to implement.
What it does is first test if we have reached the end condition, and if so, return whether this partition is successful. If we still have elements left in the set, we try what happens if we add the first one to the first set, and then what happens when adding it to the second set. Note that we don't actually keep track of the two sets; we just keep track of the difference between the sums of set 1 and set 2 (but you could pass along both sets instead).
Also note that for this implementation to work, you need to make copies of the input set and not modify it!
For some background information: this problem is called the Partition Problem, which is famous for being NP-complete (which means it probably is not possible to solve it efficiently for large amounts of input data, but it is very easy to verify that a given partitioning is indeed a solution).
Here is a verbose example:
public static void main(String[] args)
{
    System.out.println(balancedPartition(new int[] {-3, 5, 12, 14, -9, 13})); // true
    System.out.println(balancedPartition(new int[] {-3, 5, -12, 14, -9, 13})); // false
    System.out.println(balancedPartition(new int[] {-3, 5, -12, 14, -9})); // false
}

public static boolean balancedPartition(int[] arr)
{
    return balancedPartition(arr, 0, 0, 0, 0, 0, "", "");
}

private static boolean balancedPartition(int[] arr, int i, int groupA, int groupB, int counterA, int counterB, String groupAStr, String groupBStr)
{
    if (groupA == groupB && counterA == counterB && i == arr.length) // in case the groups are equal (also in the amount of numbers)
    {
        System.out.println(groupAStr.substring(0, groupAStr.length() - 3) + " = " + groupBStr.substring(0, groupBStr.length() - 3)); // print the groups
        return true;
    }
    if (i == arr.length) // boundary check
        return false;
    boolean r1 = balancedPartition(arr, i + 1, groupA + arr[i], groupB, counterA + 1, counterB, groupAStr + arr[i] + " + ", groupBStr); // try adding to group 1
    boolean r2 = balancedPartition(arr, i + 1, groupA, groupB + arr[i], counterA, counterB + 1, groupAStr, groupBStr + arr[i] + " + "); // try adding to group 2
    return r1 || r2;
}
Output:
-3 + 5 + 14 = 12 + -9 + 13 // one option for the first array
12 + -9 + 13 = -3 + 5 + 14 // another option for the first array
true // for the first array
false // for the second array
false // for the third array
For a school project I had to code the Cracker Barrel triangle peg game; here's a link to what it is: http://www.joenord.com/puzzles/peggame/3_mid_game.jpg I made a triangular symmetric matrix to represent the board:
|\
|0\
|12\
|345\
|6789\....
public int get( int row, int col )
{
    if (row >= col) // prevents array out of bounds
        return matrix[row][col];
    else
        return matrix[col][row];
}
And here is my get() function for that matrix shape. If I call get(row, col) with row < col, it accesses get(col, row) instead; it's set up that way in all my methods, which makes it easier to prevent out-of-bounds access. Empty spots in the triangle are set to 0, and all pegs are set to 1. (There's an unrelated reason why I didn't use a boolean array.) The project is an AI project, and to develop a heuristic search algorithm I need access to the number of pegs adjacent to each other. I can easily avoid most duplicates by dividing the total by 2, since every adjacency is counted in both directions, but I don't know how to prevent duplicate checks when I cross that middle line. It only matters at the 0, 2, 5, and 9 positions. If I really wanted to, I could write a separate set of rules for those positions, but that doesn't feel like good coding and doesn't generalize to different-sized triangles. Any input is welcome, and if you need more information feel free to ask.
0, 2, 5, 9 is not an arithmetic progression. The finite differences 2-0 = 2, 5-2 = 3, 9 - 5 = 4 are in arithmetic progression. So the sequence is 0, 0 + 2 = 2, 2 + 3 = 5, 5 + 4 = 9, 9 + 5 = 14, 14 + 6 = 20, etc. They are one less than the triangle numbers 1, 3, 6, 10, 15, 21, etc. The nth triangle number has a short-cut expression, n(n+1)/2 (where n starts at 1, not 0). So your numbers are n(n+1)/2 - 1 for n = 1, 2, 3, ...
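A quick loop confirms the closed form reproduces the sequence:

```java
public class TriangleNumbers {
    public static void main(String[] args) {
        // n(n+1)/2 - 1 for n = 1, 2, 3, ... gives 0, 2, 5, 9, 14, 20, ...
        for (int n = 1; n <= 6; n++) {
            System.out.print((n * (n + 1) / 2 - 1) + " ");
        }
        // prints: 0 2 5 9 14 20
    }
}
```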
Anyway, the situation you are experiencing should tell you that setting it up so get(row,col) == get(col,row) is a bad idea. What I would do instead is to set it up so that your puzzle starts at index 1,1 and increases from there; then put special values -1 in the matrix entries 0,y and x,0 and anything with col > row. You can check for out of bounds conditions just by checking for the value -1 in a cell. Then to count the number of pegs surrounding a position you always do the same thing: check each adjacent cell for a 1.
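A sketch of that sentinel-border idea (the class and method names are mine, and I've assumed the six triangular-grid neighbours — left, right, up, down, upper-left, lower-right; the exact neighbour set depends on how you map the triangle into the matrix):

```java
import java.util.Arrays;

public class PegBoard {
    // Triangle stored in rows/cols 1..n of a larger matrix; every cell
    // outside the triangle holds the sentinel -1, so neighbour checks
    // never need explicit bounds tests.
    private final int[][] m;

    PegBoard(int n) {
        m = new int[n + 2][n + 2];
        for (int[] row : m) Arrays.fill(row, -1);
        for (int r = 1; r <= n; r++)
            for (int c = 1; c <= r; c++)
                m[r][c] = 1; // start with a peg everywhere
    }

    // Count pegs adjacent to (r, c) using the six triangular-grid neighbours.
    int adjacentPegs(int r, int c) {
        int[][] dirs = {{0, -1}, {0, 1}, {-1, 0}, {1, 0}, {-1, -1}, {1, 1}};
        int count = 0;
        for (int[] d : dirs)
            if (m[r + d[0]][c + d[1]] == 1) count++;
        return count;
    }

    public static void main(String[] args) {
        PegBoard board = new PegBoard(5);
        System.out.println(board.adjacentPegs(1, 1)); // 2: the top peg touches the two below it
    }
}
```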
I have a set of key codes, with values (mod 4, of course) 0 to 3 corresponding to the keys down, left, up, right, in that order. I need to convert these key codes into x and y directions, with a positive x indicating a location left of the origin, and a positive y indicating a location below the origin. The way I see it, I have two ways of doing this:
using arrays:
int [] dx = {0, -1, 0, 1};
int [] dy = {1, 0, -1, 0};
int x = dx[kc];
int y = dy[kc];
or using arithmetic:
int x = (kc%2)*(((kc/2)%2)*2 - 1);
int y = ((kc+1)%2)*(((kc/2)%2)*-2 + 1);
which would be more efficient?
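Whichever is faster, it's worth a sanity check that the two encodings actually agree; a quick loop over the four key codes shows they do:

```java
public class DirectionCheck {
    public static void main(String[] args) {
        int[] dx = {0, -1, 0, 1};
        int[] dy = {1, 0, -1, 0};
        for (int kc = 0; kc < 4; kc++) {
            // arithmetic version from above, compared against the lookup tables
            int x = (kc % 2) * (((kc / 2) % 2) * 2 - 1);
            int y = ((kc + 1) % 2) * (((kc / 2) % 2) * -2 + 1);
            System.out.println(x == dx[kc] && y == dy[kc]); // true for all four
        }
    }
}
```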
It probably depends on the language. I would think the integer representation would be more efficient. Or better yet, if you need space you could represent directions with bit strings. You would need 4 bits for the four directions. Most ints are 4 bytes, which is 8x the storage! Then again, this probably doesn't affect anything unless you are storing a LOT of these.
I would abstract away the representation with direction methods (getDirection(), setDirection(), etc) and then try running your program with several different kinds.
Edit: woops, I meant to make this a comment, not an answer. Sorry about that.
Profiling would be your friend, but, I would separate your constants out in a different way. Consider:
private static final int[][] directions = {
{0, 1},
{-1, 0},
{0, -1},
{1, 0}
};
Then you can do it as simply:
x = directions[kc][0];
y = directions[kc][1];
First of all, I wouldn't really worry about the efficiency of either approach since it's very unlikely that this code will be the bottleneck in any real world application. I do however, think that the first approach one is much more readable. So if you value your maintenance and debugging time, that's the way to go.
If performance is that important, and this piece of code is critical, you should actually benchmark the two approaches. Use something like google caliper for that.
Second, you can optimize the second approach by replacing the (somewhat slow) modulus operation with a logical AND (x & 1 gives the same result as x % 2 for non-negative int x, only faster), and replacing the multiplication by 2 with a logical left shift (x << 1 instead of x * 2).
Here's yet another way to do the conversion.
package com.ggl.testing;

import java.awt.Point;

public class Convert {

    public Point convertDirection(int index) {
        // 0 to 3 corresponds to the keys down, left, up, right
        Point[] directions = { new Point(0, 1), new Point(-1, 0),
                new Point(0, -1), new Point(1, 0) };
        return directions[index];
    }
}