Pretty good heuristic evaluation rules for big TicTacToe 5x5 board - java

I have created a TicTacToe game using the minimax algorithm.
When the board is 3x3 I just calculate every possible move to the end of the game and score -1 for a loss, 0 for a tie, 1 for a win.
When it comes to 5x5 that can't be done (too many options, something like 24^24), so I created an evaluation method which gives 10^0 for one CIRCLE in line, 10^1 for 2 CIRCLES in line, ..., 10^4 for 5 CIRCLES in line, but it is useless.
Does anybody have a better idea for the assessment?
Example:
O|X|X| | |
----------
|O| | | |
----------
X|O| | | |
----------
| | | | |
----------
| | | | |
Evaluation -10: 2 circles in a line twice, once diagonally and once in a column (+200), 2 crosses in a line once (-100), and -1 three times and +1 three times for each single cross and circle.
This is my evaluation method now:
public void setEvaluationForBigBoards() {
    int evaluation = 0;
    int howManyInLine = board.length;
    for (; howManyInLine > 0; howManyInLine--) {
        evaluation += countInlines(player.getStamp(), howManyInLine);
        evaluation -= countInlines(player.getOppositeStamp(), howManyInLine);
    }
    this.evaluation = evaluation;
}

public int countInlines(int sign, int howManyInLine) {
    int points = (int) Math.pow(10, howManyInLine - 1);
    int positiveCounter = 0;
    for (int i = 0; i < board.length; i++) {
        for (int j = 0; j < board[i].length; j++) {
            // check whether, starting from this cell, there is a sequence to the right,
            // down, down-right diagonally or down-left diagonally
            if (toRigth(i, j, sign, howManyInLine))
                positiveCounter++;
            if (howManyInLine > 1) {
                if (toDown(i, j, sign, howManyInLine))
                    positiveCounter++;
                if (toRightDiagonal(i, j, sign, howManyInLine))
                    positiveCounter++;
                if (toLeftDiagonal(i, j, sign, howManyInLine))
                    positiveCounter++;
            }
        }
    }
    return points * positiveCounter;
}

The number of options (possible sequences of moves) after the first move is 24!, not 24^24. It is still far too high a number, so it is correct to implement a heuristic.
Note that answers about good heuristics are necessarily based on the opinion of the writer, so I give my opinion; but to find out what "the best heuristic" is, you should make the various ideas play against each other in the following way:
take the two heuristics A and B that you want to compare
generate at random a starting configuration
let A play with O and B play with X
from the same starting configuration let A play with X and B play with O
take stats of which one wins more
Now my thoughts about good possible heuristic starting points for an n x n playfield with a winning sequence length of n:
since the winning condition for a player is to form a straight sequence of its marks, my idea is to use as base values the number of possibilities each player still has available to build such a straight sequence.
in an empty field both O and X have ideally the possibility to realize the winning sequence in several ways:
horizontal possibilities: n
vertical possibilities: n
diagonal possibilities: 2
total possibilities: 2n+2
in the middle of a round the number of remaining opportunities for a player is calculated as "the number of rows without opponent's marks + the number of columns without opponent's marks + the number of diagonals without opponent's marks".
instead of recalculating everything each time, it can be considered that:
after a move by one player the number of still-available possibilities is:
unchanged for him
equal or lowered for the opponent (if the mark has been placed in a row/col/diagonal where no marks had already been placed by the considered player)
as a heuristic I can propose: myPossibilities - opponentPossibilities.
It is possible that myPossibilities - k * opponentPossibilities with k > 1 gives better results, and in the end this can be related to how a draw is considered with regard to a loss.
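As a concrete illustration, here is a minimal Java sketch of this "remaining open lines" count for the n x n board with winning length n; the names (openLines, evaluate), the 0-means-empty convention and the weight k are my own assumptions, not taken from the question's code:

static int openLines(int[][] board, int opponent) {
    int n = board.length;
    int open = 0;
    for (int r = 0; r < n; r++) {                       // rows with no opponent mark
        boolean blocked = false;
        for (int c = 0; c < n; c++) if (board[r][c] == opponent) blocked = true;
        if (!blocked) open++;
    }
    for (int c = 0; c < n; c++) {                       // columns with no opponent mark
        boolean blocked = false;
        for (int r = 0; r < n; r++) if (board[r][c] == opponent) blocked = true;
        if (!blocked) open++;
    }
    boolean mainBlocked = false, antiBlocked = false;   // the two diagonals
    for (int i = 0; i < n; i++) {
        if (board[i][i] == opponent) mainBlocked = true;
        if (board[i][n - 1 - i] == opponent) antiBlocked = true;
    }
    if (!mainBlocked) open++;
    if (!antiBlocked) open++;
    return open;                                        // at most 2n + 2
}

static int evaluate(int[][] board, int me, int opponent, int k) {
    // lines still winnable for me minus (weighted) lines still winnable for the opponent
    return openLines(board, opponent) - k * openLines(board, me);
}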
One side consideration:
playfield cells are n^2
winning possibilities are 2n+2 if we keep the winning length equal to the field edge size
this gives me the idea that the more the size is increased, the less interesting the game becomes, because the probability of a draw after a low number of moves (relative to the playfield area) becomes higher and higher.
for this reason I think that the game with a winning length lower than n (for example 3, independently of the playfield size) is more interesting.
Calling l the winning length, the number of possibilities is 2*((n+1-l)*(2n+1-l)) = O(n^2), and so it is well proportioned to the field area.

Minimum number of disconnections

There are N cities connected by N-1 roads.
Each adjacent pair of cities is connected by bidirectional roads i.e.
i-th city is connected to i+1-th city for all 1 <= i <= N-1, given as below:
1 --- 2 --- 3 --- 4...............(N-1) --- N
We have M queries of type (c1, c2) asking to disconnect the pair of cities c1 and c2.
For that we decided to block some roads to satisfy all these M queries.
Now we have to determine the minimum number of roads that need to be
blocked so that all queries are served.
Example :
inputs:
- N = 5 // number of cities
- M = 2 // number of query requests
- C = [[1,4], [2,5]] // queries
output: 1
Approach :
1. Block the road connecting cities 2 and 3, and all queries will be served.
2. Thus, the minimum number of roads that needs to be blocked is 1.
Constraints :
- 1 <= T <= 2 * 10^5 // number of test cases
- 2 <= N <= 2 * 10^5 // number of cities
- 0 <= M <= 2 * 10^5 // number of queries
- 1 <= C(i,j) <= N
It is guaranteed that the sum of N over T test cases doesn't exceed 10^6.
It is also guaranteed that the sum of M over T test cases doesn't exceed 10^6.
My Approach :
I solved this problem using a min-heap, but I am not sure whether it will work
on all the edge (corner) test cases and whether it has optimal
time/space complexity.
public int solve(int N, int M, Integer[][] c) {
    int minCuts = 0;
    if (M == 0) return 0;
    // sort queries by start city in increasing order
    Arrays.sort(c, (Integer[] a, Integer[] b) -> a[0] - b[0]);
    PriorityQueue<Integer> minHeap = new PriorityQueue<>();
    // as soon as the smallest end city in the min-heap is less than or equal to the
    // current start city, increment minCuts and remove all elements from the min-heap
    for (int i = 0; i < M; i++) {
        int start = c[i][0];
        int end = c[i][1];
        if (!minHeap.isEmpty() && minHeap.peek() <= start) {
            minCuts += 1;
            while (!minHeap.isEmpty()) {
                minHeap.poll();
            }
        }
        minHeap.add(end);
    }
    return minCuts + 1;
}
Is there any edge test case for which this approach will fail?
For each query, there is an (inclusive) interval of acceptable cut points, so the task is to find the minimum number of cut points that intersect all intervals.
The usual algorithm for this problem, which you can see here, is an optimized implementation of this simple procedure:
Select the smallest interval end as a cut point
Remove all the intervals that it intersects
Repeat until there are no more intervals.
It's easy to prove that it's always optimal to select the smallest interval end:
The smallest cut point must be <= the smallest interval end, because otherwise that interval won't get cut.
If an interval intersects any point <= the smallest interval end, then it must also intersect the smallest interval end.
The smallest interval end is therefore an optimal choice for the smallest cut point.
It takes a little more work, but you can prove that your algorithm is also an implementation of this procedure.
First, we can show that the smallest interval end is always the first one popped off the heap, because nothing is popped until we find a starting point greater than a known endpoint.
Then we can show that the endpoints removed from the heap correspond to exactly the intervals that are cut by that first endpoint. All of their start points must be <= that first endpoint, because otherwise we would have removed them earlier. Note that you didn't adjust your queries into inclusive intervals, so your test says peek() <= start. If they were adjusted to be inclusive, it would say peek() < start.
Finally, we can trivially show that there are always unpopped intervals left on the heap, so you need that +1 at the end.
So your algorithm makes the same optimal selection of cut points. It's more complicated than the other one, though, and harder to verify, so I wouldn't use it.
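For reference, here is a minimal sketch of the simpler greedy described above, assuming the queries arrive as an int[][] of (c1, c2) pairs (the helper name minCuts and the conversion to road intervals are my own framing); it uses java.util.Arrays:

static int minCuts(int[][] queries) {
    // cutting road i (between city i and i+1) separates c1 and c2 exactly when c1 <= i <= c2 - 1,
    // so each query becomes the inclusive interval [c1, c2 - 1] of acceptable cut roads
    int[][] intervals = new int[queries.length][2];
    for (int i = 0; i < queries.length; i++) {
        int a = Math.min(queries[i][0], queries[i][1]);
        int b = Math.max(queries[i][0], queries[i][1]);
        intervals[i][0] = a;
        intervals[i][1] = b - 1;
    }
    // process intervals by increasing right end; cut at that end whenever the
    // interval is not already covered by the previous cut
    Arrays.sort(intervals, (p, q) -> Integer.compare(p[1], q[1]));
    int cuts = 0;
    int lastCut = Integer.MIN_VALUE;
    for (int[] iv : intervals) {
        if (iv[0] > lastCut) {
            lastCut = iv[1];
            cuts++;
        }
    }
    return cuts;
}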

How to find the point that gives the maximum value fast? Java or c++ code please

I need a fast way to find the maximum value when intervals overlap. Unlike finding the point covered by the most intervals, here there is an "order": I have int[][] data with 2 values per int[], where the first number is the center and the second is the radius, and the closer a point is to the center, the larger the value it contributes. For example, if I am given data like:
int[][] data = new int[][]{
{1, 1},
{3, 3},
{2, 4}};
Then on a number line, this is how it's going to looks like:
x axis: -2 -1 0 1 2 3 4 5 6 7
1 1: 1 2 1
3 3: 1 2 3 4 3 2 1
2 4: 1 2 3 4 5 4 3 2 1
So for the value of my point to be as large as possible, I need to pick the point x = 2, which gives a total value of 1 + 3 + 5 = 9, the largest possible value. Is there a way to do it fast, say with a time complexity of O(n) or O(n log n)?
This can be done with a simple O(n log n) algorithm.
Consider the value function v(x), and then consider its discrete derivative dv(x) = v(x) - v(x-1). Suppose you only have one interval, say {3,3}. dv(x) is 0 from -infinity to -1, then 1 from 0 to 3, then -1 from 4 to 7, then 0 from 8 to infinity. That is, the derivative changes by 1 "just after" -1, by -2 just after 3, and by 1 just after 7.
For n intervals, there are 3*n derivative changes (some of which may occur at the same point). So find the list of all derivative changes (x,change), sort them by their x, and then just iterate through the set.
Behold:
intervals = [(1, 1), (3, 3), (2, 4)]

events = []
for mid, width in intervals:
    before_start = mid - width - 1   # slope becomes +1 just after this x
    after_end = mid + width + 1      # slope returns to 0 just after this x
    events += [(before_start, 1), (mid, -2), (after_end, 1)]
events.sort()

prev_x = -1000
v = 0
dv = 0
best_v = -1000
best_x = None
for x, change in events:
    dx = x - prev_x
    v += dv * dx            # value at x, using the slope that held up to x
    if v > best_v:
        best_v = v
        best_x = x
    dv += change
    prev_x = x

print(best_x, best_v)
And also the Java code, where cows is the input data (a List<int[]> of {center, radius} pairs):
TreeMap<Integer, Integer> ts = new TreeMap<>();
for (int i = 0; i < cows.size(); i++) {
    int center = cows.get(i)[0];
    int radius = cows.get(i)[1];
    // slope +1 just after center - radius - 1, -2 just after the center,
    // and +1 just after center + radius + 1 (back to flat)
    ts.merge(center - radius - 1, 1, Integer::sum);
    ts.merge(center, -2, Integer::sum);
    ts.merge(center + radius + 1, 1, Integer::sum);
}
long value = 0;
long best = 0;
int change = 0;
int indexBefore = ts.isEmpty() ? 0 : ts.firstKey();
for (Map.Entry<Integer, Integer> e : ts.entrySet()) {
    int index = e.getKey();
    value += (long) (index - indexBefore) * change;   // value at index, using the old slope
    best = Math.max(value, best);
    change += e.getValue();
    indexBefore = index;
}
Hmmm, a general O(n log n) or better would be tricky, probably solvable via linear programming, but that can get rather complex.
After a bit of wrangling, I think this can be solved via line intersections and summation of functions (represented by line segments). Basically, think of each input as a triangle on top of a line. If the input is (C, R), the triangle is centered on C and has a radius of R. The points on the line are C-R (value 0), C (value R) and C+R (value 0). Each line segment of the triangle represents a value.
Consider any 2 such "triangles"; the max value occurs in one of 2 places:
The peak of one of the triangles
The intersection point of the triangles, or the point where the two triangles overlap.
Multiple triangles just mean more possible intersection points; sadly the number of possible intersections grows quadratically, so O(N log N) or better may be impossible with this method (unless some good optimizations are found), unless the number of intersections is O(N) or less.
To find all the intersection points, we can just use a standard algorithm for that, but we need to modify things in one specific way. We need to add a line that extends from each peak high enough that it would be higher than any other line, so basically from (C, C) to (C, Max_R). We then run the algorithm; output-sensitive intersection-finding algorithms are O(N log N + k), where k is the number of intersections. Sadly this can be as high as O(N^2) (consider the case (1,100), (2,100), (3,100), ... and so on up to (50,100): every line would intersect every other line). Once you have the O(N + k) intersections, at every intersection you can calculate the value by summing all the segments active at that point. The running sum can be kept as a cached value so it only changes O(k) times, though that might not be possible, in which case it would be O(N*k) instead, making it potentially O(N^3) (in the worst case for k). For each intersection you need to sum up to O(N) lines to get the value at that point, though in practice the performance would likely be better.
There are optimizations that could be done considering that you aim for the max and not just to find intersections. There are likely intersections not worth pursuing; however, I could also see a situation where it is so close that you can't cut anything down. It reminds me of a convex hull: in many cases you can easily reduce 90% of the data, but there are cases where you see the worst-case results (every point or almost every point is a hull point). For example, in practice there are certainly cases where you can be sure that the sum is going to be less than the current known max value.
Another optimization might be building an interval tree.

Find last int (1-9) stored in long of bits (each int represented by 4 bits)

I am working on a Tic-Tac-Toe AI and want to find the last move (the opponent's last move before the current turn) using a long provided by the Game Engine.
Each space is represented by a single-digit integer 1-9 (which I will subtract 1 from to get moves 0-8), plus 9 for off-board moves, which are stored in the long as 0xF.
0xE is used to represent NULL, but will be treated by my program the same as an off-board move.
Here is how the game state is encoded:
Used to encode game State, first 4 bits are first move, second 4 bits second move, (4 * 9 = 36 bits) bits 33-36 are the last Move. Each move is the coordinate singleton + 1, therefore the tictactoe board is recorded as...
1 | 2 | 3
4 | 5 | 6
7 | 8 | 9
Normal equation for singleton is row*3+col, but you cannot record a state as 0, therefore game state moves are row*3+col + 1, note difference Coordinate singleton is 0..8, board game state position is 1..9;
1 | 2 | 3
4 | 5 | 6
7 | 8 | 9
The game state 0x159 means: X's first move is 9; O's move (game move 2) is 5; X's move (game move 3) is 1.
X _ _
_ O _
_ _ 9
Moves off the board set all 4 bits (aka 0xF),
e.g., 0x12f45: on X's second move (game move 3),
X picked a coordinate outside the tic-tac-toe range.
Duplicate guesses onto an occupied square are just saved,
e.g., 0x121 implies X has used position 1 on both his
first and second move.
A null coordinate, usually caused by an exception, is saved as 0xE,
e.g., 0x1E3 implies that on game move 2 (O's first move) O threw an exception,
most likely caused by an array index out of bounds.
As of now, here is how I am finding the last move using the engine's game state:
private int LastMoveFinder(final Board brd, int move)
{
    char prevMove = Long.toHexString(brd.getGameState()).charAt(0);
    if (prevMove == 'f' || prevMove == 'e')
        return 9;
    else
        return Character.getNumericValue(prevMove) - 1;
}
But I am sure there is a faster way (performance-wise) to find the last move using some sort of bit-shift method, as our AIs will be tested against each other for speed (nanoseconds/move) and win-tie-loss ratio.
I have read up on bit shifting and searched all over Stack Overflow for answers to questions like mine, but nothing I have tried to implement in my program has worked.
I am sure I'm missing something simple, but I have not taken a course that covers bit shifting and masking yet, so I am at somewhat of a loss.
Thanks for your help.
You can get 4 bits of the game-state long by ANDing it with the bitmask 0xf shifted left by 4 * moveNumber bits. Then shift the result right by 4 * moveNumber bits, narrow it to an int, and apply your move logic to that int. The modified method is:
/**
 * Assumes moveNumber is 0-indexed.
 */
private int LastMoveFinder(final Board brd, int moveNumber)
{
    long moveMask = 0xfL << (4 * moveNumber);
    int prevMove = (int) ((brd.getGameState() & moveMask) >>> (4 * moveNumber));
    if (prevMove == 0xf || prevMove == 0xe) {
        return 9;
    } else {
        return prevMove - 1;
    }
}
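If the index of the latest move is not tracked separately, it can also be derived from the position of the highest non-zero nibble. A small sketch, assuming getGameState() is only called after at least one move has been recorded (the helper name is mine):

private int lastMoveIndex(long state) {
    // 0-based index of the most significant non-zero nibble, i.e. the latest move
    return (63 - Long.numberOfLeadingZeros(state)) / 4;
}

The result can then be passed as moveNumber to the method above, e.g. LastMoveFinder(brd, lastMoveIndex(brd.getGameState())).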

How to generate a distribution of k shots on n enemies

I am developing a space combat game in Java as part of an ongoing effort to learn the language. In a battle, I have k ships firing their guns at a fleet of n of their nefarious enemies. Depending on how many of their enemies get hit by how many of the shots (each ship fires one shot, which hits one enemy), some will be damaged and some destroyed. I want to figure out how many enemies were hit once, how many were hit twice and so on, so that at the end I have a table that looks something like this, for 100 shots fired:
Number of hits | Number of occurences | Total shots
----------------------------------------------------
1 | 30 | 30
2 | 12 | 24
3 | 4 | 12
4 | 7 | 28
5 | 1 | 5
Obviously, I can brute force this for small numbers of shots and enemies by randomly placing each shot on an enemy and then counting how many times each got shot at the end. This method, however, will be very impractical if I've got three million intrepid heroes firing on a swarm of ten million enemies.
Ideally, what I'd like is a way to generate a distribution of how many enemies are likely to be hit by exactly some number of shots. I could then use a random number generator to pick a point on that distribution, and then repeat this process, increasing the number of hits each time, until approximately all shots are accounted for. Is there a general statistical distribution / way of estimating approximately how many enemies get hit by how many shots?
I've been trying to work out something from the birthday problem to figure out the probability of how many birthdays are shared by exactly some number of people, but have not made any significant progress.
I will be implementing this in Java.
EDIT: I found a simplification of this that may be easier to solve: what's the distribution of probabilities that n enemies are not hit at all? I.e. what's the probability that zero are not hit, one is not hit, two are not hit, etc.
It's a similar problem (OK, the same problem but with a simplification), but it seems like it might be easier to solve, and would let me generate the full distribution in a couple of iterations.
You should take a look at the multinomial distribution, constraining it to the case where all p_i are equal to 1/k (be careful to note that the Wikipedia article swaps the meaning of your k and n).
Previous attempt at answer
Maybe an approach like the following will be fruitful:
the probability that a particular ship is hit by a particular shot is 1/n;
the probability that a given ship is hit exactly once after k shots: h1 = (1/n) * (1 - 1/n)^(k-1);
as above, but exactly twice: h2 = (1/n)^2 * (1 - 1/n)^(k-2), and so on;
expected number of ships hit exactly once: n * h1, and so on.
If you have S ships and fire A shots at them, each individual ship's number of hits will follow a binomial distribution where p = 1/S and n = A:
http://en.wikipedia.org/wiki/Binomial_distribution
You can query this distribution and ask:
How likely is it for a ship to be hit 0 times?
How likely is it for a ship to be hit 1 time?
How likely is it for a ship to be hit 2 times?
How likely is it for a ship to be hit (max health) or more times? (Hint: just subtract the sum of the probabilities above from 1.0)
and multiply these by the number of ships, S, to get the number of ships that you expect to be hit 0, 1, 2, 3, etc times. However, as this is an expectation not a randomly rolled result, battles will go exactly the same way every time.
If you have a low number of ships yet a high number of shots, you can roll the binomial distribution once per ship. Or if you have a low number of shots yet a high number of ships, you can randomly place each shot. I haven't yet thought of a cool way to get the random distribution (or a random approximation thereof) for a high number of shots AND a high number of ships, but it would be awesome to find one :)
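As a rough sketch of that per-ship binomial calculation (the method name and the log-space computation, used to avoid overflow with millions of shots, are my own choices, and at least two ships are assumed), this returns the expected number of ships hit exactly h times for h = 0..maxHits:

static double[] expectedShipsByHits(int ships, int shots, int maxHits) {
    double p = 1.0 / ships;                        // chance a given shot hits a given ship
    double[] expected = new double[maxHits + 1];
    for (int h = 0; h <= maxHits; h++) {
        // log of C(shots, h) * p^h * (1 - p)^(shots - h)
        double logProb = 0;
        for (int i = 0; i < h; i++) {
            logProb += Math.log(shots - i) - Math.log(i + 1);
        }
        logProb += h * Math.log(p) + (shots - h) * Math.log1p(-p);
        expected[h] = ships * Math.exp(logProb);   // expectation over all ships
    }
    return expected;
}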
I'm assuming that each shot has probability h to hit any bad ship. If h = 0, all shots will miss. If h = 1, all shots will hit something.
Now, let's say you shoot b bullets. The expected value of ships hit is simply Hs = h * b, but these are not unique ships hit.
So we have a list of ships that is Hs long. The chance of any specific enemy ship being hit, given N enemy ships, is 1/N. Therefore, the chance to be in the first k slots but not the other slots is
(1/N)^k * (1-1/N)^(Hs-k)
Note that this is Marko Topolnik's answer. The problem is that this is a specific ship being in the FIRST k slots, as opposed to being in any combination of k slots. We must modify this by taking into account the number of combinations of k slots in Hs total slots:
(Hs choose k) * (1/N)^k * (1-1/N)^(Hs-k)
Now we have the chance of a specific ship being in k slots. Well, now we need to consider the entire fleet of N ships:
(Hs choose k) * (1/N)^k * (1-1/N)^(Hs-k) * N
This expression represents the expected number of ships being hit k times within an N sized fleet that was hit with Hs shots in a uniform distribution.
Numerical Sanity Check:
Let's say two bullets hit (Hs=2) and we have two enemy ships (N=2). Assign each ship a binary ID, and let's enumerate the possible hit lists.
00 (ship 0 hit twice)
01
10
11
The number of ships hit once is:
(2 choose 1) * (1/2)^1 * (1-1/2)^(2-1) * 2 = 1
The number of ships hit twice is:
(2 choose 2) * (1/2)^2 * (1-1/2)^(2-2) * 2 = 0.5
To complete the sanity check, we need to make sure our total number of hits equals Hs. Every ship hit twice takes 2 bullets, and every ship hit once takes one bullet:
1*1 + 0.5*2 = 2 == Hs **TRUE**
One more quick example with Hs=3 and N=2:
(3 choose 1) * (1/2)^1 * (1-1/2)^(3-1) * 2
3 * 0.5 * 0.25 * 2 = 0.75
(3 choose 2) * (1/2)^2 * (1-1/2)^(3-2) * 2
3 * 0.5^2 * 0.5 * 2 = 0.75
(3 choose 3) * (1/2)^3 * (1-1/2)^(3-3) * 2
1 * 0.5^3 * 1 * 2 = 0.25
0.75 + 0.75*2 + 0.25*3 = 3 == Hs **TRUE**
Figured out a way of solving this, and finally got around to writing it up in Java. This gives an exact solution for computing the probability of m ships not being hit given k ships and n shots. It is, however, quite computationally expensive. First, a summary of what I did:
The probability is equal to the total number of ways to shoot the ships with exactly m not hit, divided by the total number of ways to shoot the ships.
P = m_misses / total
Total is k^n, since each shot can hit one of k ships.
To get the numerator, start with nCr(k,m). This is the number of ways of choosing m ships to not be hit. Multiplying this by the number of ways of hitting the other k-m ships without missing any of them gives the numerator:
nCr(k,m)*(k-m_noMiss)
P = ---------------------
k^n
Now to calculate the second term in the numerator. This is the sum across all distributions of shots of how many ways there are for a certain shot distribution to happen. For example, if 2 ships are hit by 3 bullets, and each ship is hit at least once, they can be hit in the following ways:
100
010
001
110
101
011
The shot distributions are equal to the length k-m compositions of n. In this case, we would have [2,1] and [1,2], the length 2 compositions of 3.
For the first composition, [2,1], we can calculate the number of ways of generating it by choosing 2 out of the 3 shots to hit the first ship, and then 1 out of the remaining 1 shot to hit the second, i.e. nCr(3,2) * nCr(1,1). Note that we can simplify this to 3!/(2!*1!). This pattern applies to all shot patterns, so the number of ways that a certain pattern p can occur can be written as n!/prodSum(j=1,k-m,p_j!), in which the notation indicates the product of the factorials p_j! over j from 1 to k-m, and p_j represents the jth term of p.
If we define P as the set of all length k-m compositions of n, the probability of m ships not being hit is then:
nCr(k,m)*sum(p is an element of P, n!/prodSum(j=1,k-m,p_j!))
P = --------------------------------------------------------------
k^n
The notation is a bit sloppy since there's no good way of putting math symbols into SO, but that's the gist of it.
That being said, this method is horribly inefficient, but I can't seem to find a better one. If someone can simplify this, by all means post your method! I'm curious as to how it can be done.
And the Java code for doing this:
import java.util.ArrayList;
import java.util.Arrays;

import org.apache.commons.math3.util.ArithmeticUtils;

class Prob {

    public boolean listsEqual(Integer[] integers, Integer[] rootComp) {
        if (integers.length != rootComp.length) {
            return false;
        }
        for (int i = 0; i < integers.length; i++) {
            // use equals(): == / != on Integer objects compares references, not values
            if (!integers[i].equals(rootComp[i])) {
                return false;
            }
        }
        return true;
    }

    public Integer[] firstComp(int base, int length) {
        Integer[] comp = new Integer[length];
        Arrays.fill(comp, 1);
        comp[0] = base - length + 1;
        return comp;
    }

    public Integer[][] enumerateComps(int base, int length) {
        // Provides all compositions of base of size length
        if (length > base) {
            return null;
        }
        Integer[] rootComp = firstComp(base, length);
        ArrayList<Integer[]> compsArray = new ArrayList<Integer[]>();
        do {
            compsArray.add(rootComp);
            rootComp = makeNextComp(rootComp);
        } while (!listsEqual(compsArray.get(compsArray.size() - 1), rootComp));
        Integer[][] newArray = new Integer[compsArray.size()][length];
        int i = 0;
        for (Integer[] comp : compsArray) {
            newArray[i] = comp;
            i++;
        }
        return newArray;
    }

    public double getProb(int k, int n, int m) {
        // k = # of bins (ships)
        // n = number of objects (shots)
        // m = number of empty bins (unscathed ships)
        // First generate the list of length k-m compositions of n
        if ((n < (k - m)) || (m >= k)) {
            return 0;
        }
        Integer[][] L = enumerateComps(n, k - m);
        double num = 0;
        double den = Math.pow(k, n);
        double prodSum;
        int remainder;
        for (Integer[] thisComp : L) {
            remainder = n;
            prodSum = 1;
            for (Integer thisVal : thisComp) {
                prodSum = prodSum * ArithmeticUtils.binomialCoefficient(remainder, thisVal);
                remainder -= thisVal;
            }
            num += prodSum;
        }
        return num * ArithmeticUtils.binomialCoefficient(k, m) / den;
    }

    public Integer[] makeNextComp(Integer[] rootComp) {
        Integer[] comp = rootComp.clone();
        int i = comp.length - 1;
        int lastVal = comp[i];
        i--;
        for (; i >= 0; i--) {
            if (comp[i] != 1) {
                // Subtract 1 from comp[i] and push the remainder to the right
                comp[i] -= 1;
                i++;
                comp[i] = lastVal + 1;
                i++;
                for (; i < comp.length; i++) {
                    comp[i] = 1;
                }
                return comp;
            }
        }
        return comp;
    }
}

public class numbersTest {
    public static void main(String[] args) {
        Prob getProbs = new Prob();
        Integer k = 10; // ships
        Integer n = 10; // shots
        Integer m = 4;  // unscathed
        double myProb = getProbs.getProb(k, n, m);
        System.out.printf("Probability of %s ships, %s hits, and %s unscathed: %s", k, n, m, myProb);
    }
}

All possible combinations of strings from char array in Java

I have a school project for Java that I've been assigned to. Now I'm having an issue with a part of the project which I can't figure out.
The application must generate all possible word combinations (which can be verified via a dictionary) from a two-dimensional char array (char[][] board). The board is dynamic as the user can choose the scale: 4x4, 5x5, 4x5, 5x4, 4x6, ... So I guess a nested loop wouldn't be appropriate here; correct me if I'm wrong. Words must be generated horizontally, vertically and diagonally. Example of a 4x4 board:
| u | a | u | s |
| n | n | i | i |
| a | o | e | b |
| e | u | e | z |
Code was completely wrong.
Another idea may be to brute force every possible path on the board and then try those saved paths to verify whether it's a word or not.
Thanks in advance!
One way to solve this is:
for each path on the board
    if corresponding word in dictionary
        print it
To find all paths, you could adapt any graph traversal algorithm.
Now this will be really slow, because there are a great many paths on a board of that size (for a board with n cells, we can have at most n * 4^(n - 1) paths, so for a 5 by 5 board, you'd have something like 25 * 2^50 ~= 10^16 paths).
One way to improve on this is to interleave traversing the graph and checking the dictionary, aborting if the current path's word is not a prefix of a dictionary word:
class Board {
    char[][] ch;
    boolean[][] visited;
    int maxx, maxy;          // board dimensions (ch.length and ch[0].length)
    Trie dictionary;

    void find() {
        StringBuilder prefix = new StringBuilder();
        for (int x = 0; x < maxx; x++) {
            for (int y = 0; y < maxy; y++) {
                walk(x, y, prefix);
            }
        }
    }

    void walk(int x, int y, StringBuilder prefix) {
        if (!visited[x][y]) {
            visited[x][y] = true;
            prefix.append(ch[x][y]);
            if (dictionary.hasPrefix(prefix)) {
                if (dictionary.contains(prefix)) {
                    System.out.println(prefix);
                }
                int firstX = Math.max(0, x - 1);
                int lastX = Math.min(maxx - 1, x + 1);
                int firstY = Math.max(0, y - 1);
                int lastY = Math.min(maxy - 1, y + 1);
                for (int ax = firstX; ax <= lastX; ax++) {
                    for (int ay = firstY; ay <= lastY; ay++) {
                        walk(ax, ay, prefix);
                    }
                }
            }
            prefix.setLength(prefix.length() - 1);
            visited[x][y] = false;
        }
    }
}
As you can see, the method walk invokes itself. This technique is known as recursion.
That leaves the matter of finding a data structure for the dictionary that supports efficient prefix queries. The best such data structure is a Trie. Alas, the JDK does not contain an implementation, but fortunately, writing one isn't hard.
Note: The code in this answer has not been tested.
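For completeness, here is a minimal Trie sketch offering the two operations walk relies on (hasPrefix and contains); restricting keys to lowercase 'a'..'z' is an assumption made here, and this sketch is likewise untested:

class Trie {
    private final Trie[] next = new Trie[26];
    private boolean isWord;

    void add(String word) {
        Trie node = this;
        for (int i = 0; i < word.length(); i++) {
            int c = word.charAt(i) - 'a';
            if (node.next[c] == null) node.next[c] = new Trie();
            node = node.next[c];
        }
        node.isWord = true;
    }

    private Trie find(CharSequence s) {
        Trie node = this;
        for (int i = 0; i < s.length() && node != null; i++) {
            node = node.next[s.charAt(i) - 'a'];
        }
        return node;
    }

    boolean hasPrefix(CharSequence prefix) {
        return find(prefix) != null;
    }

    boolean contains(CharSequence word) {
        Trie node = find(word);
        return node != null && node.isWord;
    }
}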
A fairly straightforward way of conceptualizing this is to apply a breadth-first search (BFS) approach to each position (or depth-first, depending upon which tweaks you might later want to make). This would give you all possible letter combinations, up to a level of characters equal to the max depth of the search. Depending on your requirements, such as the longest allowed word, max running time, and if a dictionary is provided via a data structure or file, this may be the key part.
Or, you may need to optimize quite a bit more. If so, consider how you might expedite either a BFS or DFS. What if you did a DFS, but knew three characters in that no word starts with "zzz"? You could shave a lot of time off by not having to traverse all conceivable orderings. To look words up effectively, you might need to make further adjustments. But I'd start with Java's built-in functionality (String.startsWith() comes to mind in this instance), measure performance (perhaps with a limited max word length), and then optimize when and where it's needed.
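If a full Trie feels like overkill, the same prefix pruning can be approximated with a sorted word list and binary search, in the spirit of the String.startsWith() suggestion above; the array and helper names here are hypothetical:

// sortedWords must be sorted lexicographically
static boolean anyWordHasPrefix(String[] sortedWords, String prefix) {
    int pos = java.util.Arrays.binarySearch(sortedWords, prefix);
    if (pos >= 0) {
        return true;                              // the prefix itself is a dictionary word
    }
    int insertion = -pos - 1;                     // index of the first word >= prefix
    return insertion < sortedWords.length && sortedWords[insertion].startsWith(prefix);
}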
Start by turning rows, columns and diagonals into Strings using a simple repetitive method. Then I would turn each into a StringBuilder in order to check which words are real and eliminate those which aren't, directly from the StringBuilder. Then just print it to a String. There are a lot of useful tools to eliminate or replace words in Java.
