For a school project I had to code the Cracker Barrel triangle peg game (http://www.joenord.com/puzzles/peggame/3_mid_game.jpg shows what it looks like). I made a symmetric triangular matrix to represent the board:
|\
|0\
|12\
|345\
|6789\....
public int get( int row, int col )
{
    if (row >= col) // prevents array out of bounds
        return matrix[row][col];
    else
        return matrix[col][row];
}
That diagram is the form of the matrix, and above is my get() function. If I try to access get(row, col) with col > row, it accesses get(col, row) instead, and all my methods are set up that way; it makes it easier to prevent out-of-bounds errors. Empty spots in the triangle are set to 0 and all pegs are set to 1 (there's an unrelated reason why I didn't use a boolean array).

The project is an AI project, and to develop a heuristic search algorithm I need the number of pegs adjacent to each other. I can easily remove most duplicates by dividing the total by 2, since every adjacency gets counted in both directions, but I don't know how to prevent duplicate checks when I cross that middle diagonal. It only matters at positions 0, 2, 5, and 9. If I really wanted to, I could write a separate set of rules for those positions, but that doesn't feel like good coding and wouldn't work for different-sized triangles. Any input is welcome, and if you need more information feel free to ask.
0, 2, 5, 9 is not an arithmetic progression. The finite differences 2-0 = 2, 5-2 = 3, 9 - 5 = 4 are in arithmetic progression. So the sequence is 0, 0 + 2 = 2, 2 + 3 = 5, 5 + 4 = 9, 9 + 5 = 14, 14 + 6 = 20, etc. They are one less than the triangle numbers 1, 3, 6, 10, 15, 21, etc. The nth triangle number has a short-cut expression, n(n+1)/2 (where n starts at 1, not 0). So your numbers are n(n+1)/2 - 1 for n = 1, 2, 3, ...
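As a quick check of that formula (a standalone snippet, not from the original post):

// Print n(n+1)/2 - 1 for n = 1..6
for (int n = 1; n <= 6; n++) {
    System.out.print((n * (n + 1) / 2 - 1) + " ");   // prints: 0 2 5 9 14 20
}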
Anyway, the situation you are experiencing should tell you that setting it up so get(row,col) == get(col,row) is a bad idea. What I would do instead is to set it up so that your puzzle starts at index 1,1 and increases from there; then put the special value -1 in the matrix entries 0,y and x,0 and anything with col > row. You can check for out-of-bounds conditions just by checking for the value -1 in a cell. Then, to count the number of pegs surrounding a position, you always do the same thing: check each adjacent cell for a 1.
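A minimal sketch of that layout (hypothetical names, not from the answer; board rows and columns run from 1 to size with a -1 border around them, and the neighbour offsets assume the six adjacency directions of the triangular board, so adjust them if your adjacency model differs):

class PegBoard {
    static final int OUT = -1;           // sentinel for "off the board"
    final int size;
    final int[][] cell;                  // 0 = empty hole, 1 = peg, -1 = out of bounds

    PegBoard(int size) {
        this.size = size;
        cell = new int[size + 2][size + 2];
        for (int[] row : cell) java.util.Arrays.fill(row, OUT);
        for (int r = 1; r <= size; r++)
            for (int c = 1; c <= r; c++)
                cell[r][c] = 1;          // start with every hole filled
    }

    // Count pegs adjacent to (r, c); the -1 border guarantees we never index out of bounds.
    int adjacentPegs(int r, int c) {
        int[][] dirs = {{0, -1}, {0, 1}, {-1, 0}, {1, 0}, {-1, -1}, {1, 1}};
        int count = 0;
        for (int[] d : dirs)
            if (cell[r + d[0]][c + d[1]] == 1) count++;
        return count;
    }

    public static void main(String[] args) {
        PegBoard board = new PegBoard(5);
        board.cell[3][2] = 0;                          // empty one hole
        System.out.println(board.adjacentPegs(3, 2));  // pegs around that hole: 6
    }
}

Summing adjacentPegs over every hole and halving the total then gives the number of adjacent pairs, with no special cases needed at the diagonal.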
I need a fast way to find the maximum value when intervals overlap. Unlike finding the point covered by the most intervals, here the intervals have an "order": I have int[][] data where each int[] holds 2 values, the first number being the center and the second the radius, and the closer a point is to the center, the larger the value at that point. For example, if I am given data like:
int[][] data = new int[][]{
        {1, 1},
        {3, 3},
        {2, 4}};
Then on a number line, this is how it looks:
x axis:  -2 -1  0  1  2  3  4  5  6  7
1 1:              1  2  1
3 3:              1  2  3  4  3  2  1
2 4:        1  2  3  4  5  4  3  2  1
So for the value of my point to be as large as possible, I need to pick the point x = 2, which gives a total value of 1 + 3 + 5 = 9, the largest possible value. Is there a way to do this fast, say with a time complexity of O(n) or O(n log n)?
This can be done with a simple O(n log n) algorithm.
Consider the value function v(x), and then consider its discrete derivative dv(x) = v(x) - v(x-1). Suppose you only have one interval, say {3,3}. dv(x) is 0 from -infinity to -1, then 1 from 0 to 3, then -1 from 4 to 7, then 0 from 8 to infinity. That is, the derivative changes by 1 "just after" -1, by -2 just after 3, and by 1 just after 7.
For n intervals, there are 3*n derivative changes (some of which may occur at the same point). So find the list of all derivative changes (x,change), sort them by their x, and then just iterate through the set.
Behold:
intervals = [(1, 1), (3, 3), (2, 4)]

events = []
for mid, width in intervals:
    before_start = mid - width - 1    # slope becomes +1 just after this x
    at_end = mid + width + 1          # slope returns to 0 just after this x
    events += [(before_start, 1), (mid, -2), (at_end, 1)]
events.sort()

prev_x = -1000
v = 0
dv = 0
best_v = -1000
best_x = None
for x, change in events:
    dx = x - prev_x
    v += dv * dx            # value at x, using the slope in force up to x
    if v > best_v:
        best_v = v
        best_x = x
    dv += change            # apply the slope change that takes effect after x
    prev_x = x

print(best_x, best_v)
And also the Java code:
TreeMap<Integer, Integer> ts = new TreeMap<Integer, Integer>();
for (int i = 0; i < cows.size(); i++) {
    int index = cows.get(i)[0] - cows.get(i)[1];
    if (ts.containsKey(index)) {
        ts.replace(index, ts.get(index) + 1);
    } else {
        ts.put(index, 1);
    }
    index = cows.get(i)[0] + 1;
    if (ts.containsKey(index)) {
        ts.replace(index, ts.get(index) - 2);
    } else {
        ts.put(index, -2);
    }
    index = cows.get(i)[0] + cows.get(i)[1] + 2;
    if (ts.containsKey(index)) {
        ts.replace(index, ts.get(index) + 1);
    } else {
        ts.put(index, 1);
    }
}
int value = 0;
int best = 0;
int change = 0;
int indexBefore = -100000000;
while (ts.size() > 1) {
    int index = ts.firstKey();
    value += (index - indexBefore) * change;   // advance the running value to this event point
    best = Math.max(value, best);
    change += ts.get(index);                   // apply the slope change at this point
    indexBefore = index;
    ts.remove(index);
}
where cows is the input data (the list of {center, radius} pairs).
Hmmm, a general O(n log n) or better would be tricky; it's probably solvable via linear programming, but that can get rather complex.
After a bit of wrangling, I think this can be solved via line intersections and summation of functions (represented by line-segment intersections). Basically, think of each input as a triangle sitting on top of a line. If an input is (C, R), the triangle is centered on C and has a radius of R. The points on the line are C-R (value 0), C (value R) and C+R (value 0). Each line segment of the triangle represents a value.
Consider any 2 such "triangles"; the max value occurs in one of 2 places:
The peak of one of the triangles
The intersection point of the triangles' edges, i.e. where the two triangles overlap. Multiple triangles just mean more possible intersection points. Sadly, the number of possible intersections grows quadratically, so O(N log N) or better may be impossible with this method (unless some good optimizations are found), unless the number of intersections is O(N) or less.
To find all the intersection points we can just use a standard algorithm, but we need to modify things in one specific way: we need to add a line that extends from each peak high enough that it is higher than any other line, so basically from (C, C) to (C, Max_R). We then run the algorithm; output-sensitive intersection-finding algorithms are O(N log N + k), where k is the number of intersections. Sadly, k can be as high as O(N^2) (consider the case (1,100), (2,100), (3,100), ... and so on up to (50,100): every line would intersect with every other line). Once you have the O(N + k) intersections, at every intersection you can calculate the value by summing all of the lines passing through that point. The running sum can be kept as a cached value so it only changes O(k) times, though that might not be possible, in which case it would be O(N*k) instead, making it potentially O(N^3) in the worst case for k :(. That still seems reasonable: for each intersection you need to sum up to O(N) lines to get the value at that point, though in practice it would likely perform better.
There are optimizations that could be done considering that you aim for the max and not just for the intersections. There are likely intersections not worth pursuing; however, I could also see situations so close that you can't cut anything down. It reminds me of convex hull: in many cases you can easily discard 90% of the data, but there are cases where you see the worst-case results (every point, or almost every point, is a hull point). For example, in practice there are certainly cases where you can be sure the sum is going to be less than the current known max value.
Another optimization might be building an interval tree.
The North Carolina Lottery offers several draw games, two of which are Pick 3 and Pick 4. You pick 3 or 4 digits, respectively, between 0 and 9 (inclusive), and the numbers can repeat (e.g., 9-9-9 is a valid combination). I'll use Pick 3 for this example, because it's easier to work with, but I am trying to make this a generic solution to work with any number of numbers.
One of the features of Pick 3 and Pick 4 is "1-OFF," which means you win a prize if at least one of the numbers drawn is 1 up or 1 down from the numbers you have on your ticket.
For example, let's say you played Pick 3 and you picked 5-5-5 for your numbers. At least one number must be 1-off in order to win (so 5-5-5 does not win any prize, if you played the game this way). Winning combinations would be:
1 Number    2 Numbers    3 Numbers
--------    ---------    ---------
4-5-5       4-4-5        4-4-4
5-4-5       5-4-4        6-6-6
5-5-4       4-5-4        4-4-6
6-5-5       6-6-5        4-6-6
5-6-5       5-6-6        4-6-4
5-5-6       6-5-6        6-4-4
            4-5-6        6-6-4
            6-5-4        6-4-6
            6-4-5
            5-6-4
            5-4-6
            4-6-5
(I think that's all the combinations, but you get the idea).
The most "efficient" solution I could come up with is to have arrays that define which numbers are altered, and how:
int[][] alterations = {
    // 1 digit
    {-1, 0, 0}, {0, -1, 0}, {0, 0, -1}, {1, 0, 0}, {0, 1, 0}, {0, 0, 1},
    // 2 digits
    {-1, -1, 0}, ...
};
And then modify the numbers according to each of the alteration arrays:
int[] numbers = {5, 5, 5};
for (int i = 0; i < alterations.length; i++) {
    int[] copy = Arrays.copyOf(numbers, numbers.length);
    for (int j = 0; j < alterations[i].length; j++) {
        // note: this logic does not account for the numbers 0 and 9:
        // 1 down from 0 translates to 9, and 1 up from 9 translates
        // to 0, but you get the gist of how this is supposed to work
        copy[j] += alterations[i][j];
    }
    printArray(copy);
}

...

private static void printArray(int[] a) {
    String x = "";
    for (int i : a)
        x += i + " ";
    System.out.println(x.trim());
}
But I'm wondering if there's a better way to do this. Has anyone come across something like this and has any better ideas?
Sounds like you're looking for backtracking, since constructing the alterations array is quite tedious. In your backtracking algorithm you'd construct your candidates, apply the alteration, and check whether the resulting combination is valid; if so, you'd print it. I suggest you read Chapter 7 of Steven Skiena's The Algorithm Design Manual for some background on backtracking and how it can be applied to a combinatorial problem.
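For illustration, here is a rough sketch of such a backtracking generator (hypothetical class and method names, not from the question; the 0/9 wrap-around is handled the way the question describes, and the unchanged ticket itself is rejected):

import java.util.Arrays;

public class OneOff {
    // Recursively choose an offset of -1, 0, or +1 for each digit position.
    static void backtrack(int[] ticket, int[] candidate, int pos) {
        if (pos == ticket.length) {
            // Reject the single candidate identical to the ticket: at least one digit must be off.
            if (!Arrays.equals(candidate, ticket))
                System.out.println(Arrays.toString(candidate));
            return;
        }
        for (int delta = -1; delta <= 1; delta++) {
            candidate[pos] = (ticket[pos] + delta + 10) % 10;  // 0 - 1 -> 9, 9 + 1 -> 0
            backtrack(ticket, candidate, pos + 1);
        }
    }

    public static void main(String[] args) {
        int[] ticket = {5, 5, 5};
        backtrack(ticket, new int[ticket.length], 0);  // prints the 26 winning combinations
    }
}

The same method works unchanged for Pick 4 (or any length): the recursion depth is just ticket.length, so no alterations table has to be written out by hand.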
I have arrays a1 to an, each containing m elements. I have another symmetric n x n matrix b containing the allowed index distance between the arrays. I want to select one element from each array, x1 to xn, subject to the following constraint (a1 is an array and x1 is a single value taken from a1):
For every xi (which was originally aiu) and xj (which was originally ajv), where i is not the same as j and u and v are the original array indices, we have |u - v| <= bij.
The total sum of x1 to xn should be the maximum over all such feasible selections.
An example
a1 = [1, 2, 3, 8, -1, -1, 0, -1]
a2 = [1, 2, 4, 0, -1, 1, 10, 11]
b = |0, 2|
    |2, 0|
The selected values are x1 = 8 and x2 = 4. Notice that we didn't select 10 or 11 from the second array, because the indices then reachable in the first array offer a value of 0 at best.
Now, when I have only two arrays, I can do the following in Java in O(n^2) time, I guess, and find the maximum sum, which is 12 in this case. How can I achieve a better solution for more than 2 arrays?
int[][] a = new int[][]{{1, 2, 3, 8, -1, -1, 1, -1}, {1, 2, 4, 0, -1, 1, 10, 11}};
int[][] b = new int[][]{{0, 2}, {2, 0}};
int maxVal = Integer.MIN_VALUE;
for (int i = 0; i < a[0].length; i++) {
    // j may range over [i - b[0][1], i + b[0][1]], clamped to the array bounds
    for (int j = Math.max(i - b[0][1], 0); j <= Math.min(a[1].length - 1, i + b[0][1]); j++) {
        maxVal = Math.max(maxVal, a[0][i] + a[1][j]);
    }
}
System.out.println("The max val: " + maxVal);
You can't use dynamic programming here, because there is no optimal substructure: the b_1n entry can ruin a highly valuable path from x_1 to x_{n-1}. So it's probably hard to avoid exponential time in general. However, for a set of b_ij that do reasonably restrict the choices, there is a straightforward backtracking approach that should have reasonable performance:
At each step, a value has been selected from some of the a_i, but no choice has yet been made from the others. (The arrays selected need not be a prefix of the list, or even contiguous.)
If a choice has been made for every array, return (from this recursive call) the score obtained.
Consider, for each pair of a chosen array and a remaining array, the interval of indices available for selection in the latter given the restriction on distance from the choice made in the former.
Intersect these intervals for each remaining array. If any intersection is empty, reject this proposed set of choices and backtrack.
Otherwise, select the remaining array with the smallest set of choices available. Add each choice to the set of proposed choices and recurse. Return the best score found and the choice made to obtain it, if any, or reject and backtrack.
The identification of the most-constrained array is critical to performance: it constitutes a form of fuzzy belief propagation, efficiently pruning future choices incompatible with present choices necessitated by prior choices. Depending on the sort of input you expect, there might be value in doing further prioritization/pruning based on achievable scores.
My 35-line Python implementation, given a 10x10 random matrix of small integers and b_ij a constant 2, ran in a few seconds. b_ij=3 (which allows up to 7 of the 10 values for each pair of arrays!) took about a minute.
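Here is a rough Java sketch of the approach described above (illustrative names, not the Python implementation mentioned; it tracks only the best sum, for brevity):

public class ConstrainedPick {
    static int[][] a, b;
    static int n, m;
    static long best;

    static long solve(int[][] arrays, int[][] bounds) {
        a = arrays; b = bounds; n = a.length; m = a[0].length;
        best = Long.MIN_VALUE;
        backtrack(new int[n], new boolean[n], 0, 0L);
        return best;
    }

    // chosen[i] = index picked from a[i] (only meaningful where done[i] is true).
    static void backtrack(int[] chosen, boolean[] done, int numDone, long sum) {
        if (numDone == n) { best = Math.max(best, sum); return; }

        // For every remaining array, intersect the index intervals allowed by each
        // choice already made; remember the most constrained remaining array.
        int target = -1, bestLo = 0, bestHi = m - 1;
        for (int i = 0; i < n; i++) {
            if (done[i]) continue;
            int lo = 0, hi = m - 1;
            for (int j = 0; j < n; j++) {
                if (!done[j]) continue;
                lo = Math.max(lo, chosen[j] - b[i][j]);
                hi = Math.min(hi, chosen[j] + b[i][j]);
            }
            if (lo > hi) return;                       // empty interval: reject and backtrack
            if (target == -1 || hi - lo < bestHi - bestLo) {
                target = i; bestLo = lo; bestHi = hi;  // most constrained so far
            }
        }

        done[target] = true;
        for (int idx = bestLo; idx <= bestHi; idx++) {
            chosen[target] = idx;
            backtrack(chosen, done, numDone + 1, sum + a[target][idx]);
        }
        done[target] = false;
    }

    public static void main(String[] args) {
        int[][] arrays = {{1, 2, 3, 8, -1, -1, 0, -1}, {1, 2, 4, 0, -1, 1, 10, 11}};
        int[][] bounds = {{0, 2}, {2, 0}};
        System.out.println(solve(arrays, bounds));     // prints 12 for the example above
    }
}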
We were asked to do the n-queens problem in class, and I came across this bit of code online. The deadline for our submission has already passed, and I turned in a solution that uses arrays, but this code interested me, as it requires significantly fewer lines than my solution. I'm not quite sure what is happening in the else statement, so if someone could explain, I would be greatly appreciative! Thanks in advance!
import java.util.Scanner;

public class NQueens {
    private static int size;  // n
    private static int mask;  // size consecutive 1 bits
    private static int count; // solutions

    // Uses recursion to calculate the number of possible solutions, and increments "count".
    public static void backtrack(int y, int left, int down, int right) {
        int bitmap;
        int bit;
        if (y == size) {
            count++;
        }
        else {
            bitmap = mask & ~(left | down | right);
            while (bitmap != 0) {
                bit = -bitmap & bitmap;
                bitmap ^= bit;
                backtrack(y + 1, (left | bit) << 1, down | bit, (right | bit) >> 1);
            }
        }
    }

    // main
    public static void main(String[] args) {
        Scanner keyboard = new Scanner(System.in);
        System.out.print("Enter the number of queens: ");
        size = keyboard.nextInt();
        count = 0;
        mask = (1 << size) - 1;
        backtrack(0, 0, 0, 0);
        System.out.println("The valid number of arrangements is " + count);
    }
}
I'll give it here in loose terms with signposts to the details.
What is the overall approach?
As the method name hints, backtrack implements a "backtracking" search for solutions (https://en.wikipedia.org/wiki/Backtracking). That means it drives down every possible path, making a decision at each branch point about whether the quest is still possible, abandoning any path the instant it is proven not to be viable, and backtracking to the most recent decision point to try another path. Quoting that Wikipedia article, regarding the N-queens problem, "In the common backtracking approach, the partial candidates are arrangements of k queens in the first k rows of the board, all in different rows and columns [and diagonals -Ed.]. Any partial solution that contains two mutually attacking queens can be abandoned."
By "partial candidate" we mean a sequence of placements of each queen starting with k == 0 (solution still possible no matter what the next choice is), then 1 (fewer solutions possible because some choices put queens in attacking positions), then 2, and so on until N. With each placement you put the queen in a new row, because obviously any previous row is not a viable choice.
What is a "placement" in the algorithm?
To model placement of a queen in an NxN chess board, you need a data structure to represent that board, and whether a square is occupied, and whether two occupied squares are in a mutual attack relationship.
The data structure in the example is a bitmap. Here's where it gets tricky. You need to be familiar with bit manipulation to follow it.
private static int size; //n
private static int mask;
private static int count; //solutions
size is the number of queens, equal to the number of rows occupied.
count is the number of solutions found
mask is a sequence of size consecutive 1 bits, used to mask off int values to the size of the problem. In the eight-queens example, it will equal 0xff, or 0b1111_1111.
backtrack(int y, int left, int down, int right)
y is easy, it's the current number of queens placed so far, equivalently, the number of rows that have queens so far. The other three values use bit-operation trickery to reveal whether there are attack vectors computable in three directions. This is where it gets murky. I haven't gone all the way through it but I'll indicate how to proceed to full understanding.
bitmap = mask & ~(left | down | right);
Applies the OR operation between the three arguments, bit-flips the result, and masks it down to the low size bits.
bit = -bitmap & bitmap;
Takes the two's complement (arithmetic negation) of the current value of bitmap (which will not be 0 here) and ANDs it with the original value; this isolates the lowest set bit of bitmap.
bitmap ^= bit;
Applies the XOR operation to bitmap with the bit variable, which clears that single bit from bitmap (XOR flips any bit in bitmap that has a 1 in the corresponding position in bit).
backtrack(y + 1, (left | bit) << 1, down | bit, (right | bit) >> 1);
Applies the recursion to the next queen (row), setting the new left to the old one merged with bit and shifted left to indicate looking at a new file ("file" in the chess sense). It shifts the right | bit merge to the right one to indicate a new file, and it leaves the down | bit merge indicating the current file.
Loosely the result of this is to zero out the positions that have mutual attack vectors. Every different combination of file placement is tried except ones that reach full 0 before all the queens have been placed.
Exactly how those bits indicate attack vectors is left as an exercise. How they migrate around the size-bit field is a matter of pencil-and-paper tracking the loop line by line.
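As a standalone illustration of the lowest-set-bit trick used above (not part of the original program):

public class LowestBitDemo {
    public static void main(String[] args) {
        int bitmap = 0b10110;               // candidate files still open in this row
        while (bitmap != 0) {
            int bit = -bitmap & bitmap;     // isolates the lowest 1 bit
            bitmap ^= bit;                  // removes it from the bitmap
            System.out.println(Integer.toBinaryString(bit));
        }
        // prints 10, 100, 10000 -- each open file, lowest first
    }
}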
EDIT: I didn't mention it, but this algorithm handles the diagonals, as is implicit in the rules.
EDIT: Results from a sample run of a version of the program:
size, solutions, backtracks, millisec
0, 1, 1, 0
1, 1, 2, 0
2, 0, 3, 0
3, 0, 6, 0
4, 2, 17, 0
5, 10, 54, 0
6, 4, 153, 0
7, 40, 552, 0
8, 92, 2057, 0
9, 352, 8394, 0
10, 724, 35539, 1
11, 2680, 166926, 0
12, 14200, 856189, 16
13, 73712, 4674890, 116
14, 365596, 27358553, 702
15, 2279184, 171129072, 4318
16, 14772512, 1141190303, 30321
17, 95815104, 8017021932, 208300
I have a 2D array of varying size, where height can take on any value.
int[][] array = new int[height][height];
Let's say I have a 3 x 3 array with the values of:
7 8 9
6 5 4
1 2 3
Would it be possible to check to see if 1 is adjacent to 2, 2 adjacent to 3, 3 adjacent to 4, 4 adjacent to 5 and so on? Adjacent here means they are next to each other vertically, horizontally, or diagonally.
So basically, there is a link from number 1 to 9 (or maximum number - e.g. if board is a 4x4, then from 1 to 16).
This is what I have been able to come up with. It's a workable solution, although it takes a little more space; it's definitely not the best solution for this, though, and some math might be needed for a more optimal one. I am not that good at math.
// assuming the height as variable 'r'
// take an input of some element, let's assume a[0][0], referred to as a[e][e]
int f, f1 = 0, f2 = 0, arflg = 0;
int[] arr = new int[r * r];
for (int i = 0; i < r; i++) {
    for (int j = 0; j < r; j++) {
        arr[arflg] = a[i][j];
        arflg++;
        if (a[i][j] == a[e][e]) f1 = arflg;       // location of the element entered, in the flattened array
        if (a[i][j] == a[e][e] - 1) f2 = arflg;   // location of the element's predecessor, in the flattened array
    }
}
f = f2 - f1;
if (f == 1) {
    // forward horizontal
} else if (f == -1) {
    // backward horizontal
} else if (f == r) {
    // below
} else if (f == r - 1) {
    // below left
} else if (f == r + 1) {
    // below right
} else if (f == -r) {
    // above
} else if (f == (-r - 1)) {
    // above left
} else if (f == (-r + 1)) {
    // above right
}