My question is more about logic than coding itself:
I am building a Python script to simulate poker hands and get statistics from them.
My code works very well for assigning and comparing hands, and the only bottleneck for my script is getting the best combination of cards for each player:
The simulation is for omaha - each player gets 4 cards and the board has 5 cards. Each player must use the best combination of 5 cards (2 from the player's hand and 3 from the board).
The problem is: so far, the only way I can think of doing this is to evaluate every possible hand each player can have and then compare the players' best hands.
For example, player A has cards A1A2A3A4 and the board is B1B2B3B4B5:
First I am comparing all possible hands player A can get:
[A1A2B1B2B3, A1A2B1B2B4, A1A2B1B2B5, ..., A3A4B3B4B5] and get his best hand (that's C(4,2) x C(5,3) = 60 combinations per player).
Do this for all players and then check who has the winning hand.
My question is: Do you think there is a way to get each player's best hand without having to check all 60 combinations?
It took me 16 hours to run ~6.5 billion iterations (~2.5 million hands x 60 board combinations x 40 iterations per hand).
Could you also weigh in on the efficiency? I don't know if I am trying something impossible here =P
EDIT - SOLVED
Thanks for the inputs, guys. In the end I solved it by using bit manipulation:
https://codereview.stackexchange.com/questions/217597/forming-the-best-possible-poker-hand?noredirect=1#comment421020_217597
Depends on how your evaluation function works. If you just have a black-box that takes a 5-card hand and produces an evaluation, then there's not much you can do other than feed it all 60 5-card hands. But if it can be broken into pieces, it might be possible to bypass some of them.
My code in onejoker, for example, is a 5-step walk through a directed acyclic graph, so I made a special-case function for 7 cards that skips repeating some of the steps for combinations that begin with the same cards. It still ends up evaluating all 21 (7 choose 5) combinations, but in fewer than 5 * 21 steps. You could do something similar for Omaha hands.
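For concreteness, here is a minimal Python sketch of that brute force over all 60 legal Omaha combinations; evaluate5 is a stand-in for whatever black-box 5-card evaluator you already have (higher score = better hand), not a function from any particular library.
from itertools import combinations

def best_omaha_hand(hole, board, evaluate5):
    # Omaha: exactly 2 of the 4 hole cards and 3 of the 5 board cards,
    # i.e. C(4,2) * C(5,3) = 60 candidate 5-card hands.
    return max(evaluate5(h + b)
               for h in combinations(hole, 2)
               for b in combinations(board, 3))
If your evaluator can reuse work between combinations that share the same two hole cards, you can hoist the outer loop and cache that partial state, which is the same trick described above for the 7-card case.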
I would not divide into 5-card hands:
Use collections.Counter on the 9-card hand to check for quads, full houses, trips, two pair, and pairs (4k, fh, 3k, 2p, p).
Use collections.Counter on map(fget_suit, hand) to check for flushes.
Check for straights, if you have to, with Counter(x - y for x, y in zip(hand[1:], hand)).
If you really want to see each player's best 5-card hand: dump the lowest four (if you have four) un-paired, un-suited, un-connected cards.
That won't solve all of it, but it will cut the problem down considerably (sketched below).
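A rough Python sketch of those checks, assuming each card is a (rank, suit) tuple with ranks 2..14; note that it only tells you which hand categories are possible within the 9 cards, not yet whether a legal 2-from-hand / 3-from-board split actually achieves them:
from collections import Counter

def quick_features(nine_cards):
    ranks = sorted(r for r, s in nine_cards)
    rank_counts = Counter(ranks)                      # 4 -> quads, 3 -> trips, 2 -> pair, ...
    flush_possible = max(Counter(s for r, s in nine_cards).values()) >= 5
    # Straight check on de-duplicated ranks (the A-2-3-4-5 wheel is ignored here).
    unique = sorted(set(ranks))
    straight_possible = any(unique[i + 4] - unique[i] == 4
                            for i in range(len(unique) - 4))
    return rank_counts, flush_possible, straight_possible
The idea, as above, is that you only fall back to enumerating the 60 legal combinations when one of these flags actually fires.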
Related
I am looking to calculate the percentages you see on TV when watching poker, in my Java program. The difference between this question and many others, if unclear, is that the program DOES know all players' hands and can therefore determine an accurate percentage. There are many websites, such as this one: https://www.pokerlistings.com/online-poker-odds-calculator
where you can input the players' cards and it will give you the percentages. I was wondering if there are any ready-made algorithms for this, or any Java API that I could use in my program. I understand that my question may be a bit unrealistic, so if there are no algorithms or APIs, perhaps someone knows how it is calculated, so that I can try to construct my own algorithm?
In case there is still confusion:
If there are 3 players and their hands are
player 1: As Kh
player 2: 2d 3c
player 3: Ah Ad
I would like to know the percentage that each player has to win preflop, pre-turn and pre-river
Thanks in advance.
The numbers are much smaller than you might expect, so brute-forcing is quite possible. The trick is to disregard the order the cards come out as much as possible. For example, if you're considering preflop probabilities, you only care about the entire board by the river, and not the specific flop, turn and river.
Given 3 hands (6 cards), there are 46 cards remaining in the deck. That means there are choose(46, 5) = 1,370,754 different boards by the river. Enumerate all boards and count how many times each hand wins on the river (or, more accurately, compute the equity for each hand, since sometimes 2 or 3 hands will tie on the river). This gives you the preflop probabilities, and it is the most expensive thing to compute.
Given the flop, there are only choose(43, 2) = 903 possible boards by the end, so for the flop probabilities (which you call pre-turn) it's very cheap to enumerate all the runouts and compute the average equity for the three hands.
Given the flop and turn, there are only 42 possible river cards, so on the turn (pre-river) it's even cheaper to compute the hand equities.
You'll ideally need a fast hand evaluator, but that should be easy to find online if you don't want to write it yourself.
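The question is about Java, but the structure is small enough to show as a Python sketch; best_hand_value is a hypothetical fast 7-card evaluator that returns a comparable strength, and deck is the list of unseen cards:
from itertools import combinations

def preflop_equities(hole_cards, deck, best_hand_value):
    # hole_cards: one 2-card tuple per player; deck: the 46 unseen cards.
    equity = [0.0] * len(hole_cards)
    n_boards = 0
    for board in combinations(deck, 5):               # choose(46, 5) = 1,370,754 boards
        n_boards += 1
        scores = [best_hand_value(hole + board) for hole in hole_cards]
        best = max(scores)
        winners = [i for i, s in enumerate(scores) if s == best]
        for i in winners:
            equity[i] += 1.0 / len(winners)           # ties split the pot
    return [e / n_boards for e in equity]
The flop and turn versions are the same loop over combinations of 2 remaining cards and over single river cards, respectively.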
It's inefficient, but you can loop through all of the cards left to be dealt and count how many runouts each player wins. Various speed-ups can be made by eliminating possibilities (e.g. no flushes if the suits haven't lined up).
I am trying to build a poker bot in Java. I have written the hand evaluation class and I am about to start feeding a neural network, but I face a problem. I need the winning odds of every hand at every stage: preflop, flop, turn, river.
My problem is that, there are 52 cards and the combinations of 5 cards are 2,598,960. So I need to store 2,598,960 odds for each possible hand. The number is huge and these are only the odds I need for the river.
So I have two options:
Precompute the odds for every possible hand and runout, load them every time I start my application, and exhaust my memory.
Calculate the odds on the fly and run short of processing power.
Is there a 3rd better option to deal with this problem?
A third option is to use the disk... but my first choice would be to calculate odds as you need them.
Why do you need to calculate all combinations of 5 cards? A lot of these hands are worth the same; because there are 4 suits, there is a lot of repetition between hands.
Personally, I would rank your hand by how many hands beat it and how many hands it beats. From this you can estimate your probability of winning the table by accounting for the number of active hands.
What about ignoring the suits? From 52 possible cards you drop to 13 ranks, leaving only 6,175 rank combinations. Of course, suits matter for a flush, but there it is essentially binary: are all five suits the same or not? So we are at 12,350 at most (this includes impossible combinations; the true number of distinct hand strengths is 7,462, because whenever a rank appears more than once the hand cannot be a flush).
If the order matters (e.g. hole cards, then flop, turn and river), it will be a lot more, but it is still less than your two million. Try simplifying your problems and you'll realize they can be solved.
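A minimal sketch of that reduction in Python, assuming cards are (rank, suit) tuples: collapse the suits to a single flush flag so that equivalent hands share one key, and store whatever precomputed odds or strengths you need per key rather than per raw 5-card combination.
def hand_key(five_cards):
    ranks = tuple(sorted((r for r, s in five_cards), reverse=True))
    is_flush = len({s for r, s in five_cards}) == 1
    return ranks, is_flush       # ~6,175 rank patterns, times the flush flag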
OK, so I am making a blackjack program that uses the output box. My problem is trying to give the user a sort of hint.
I need help figuring out what to do at this point:
if (y.equalsIgnoreCase("Y"))
{
    // If even a ten-value card cannot bust the hand, hitting is safe advice.
    if (userHand.getBlackjackValue() + 10 < 21)
    {
        System.out.println("You should hit.");
    }
    else // getBlackjackValue() + 10 >= 21 (the original second if missed the == 21 case)
    {
        // What should the advice be here?
    }
}
The problem is the second inner if statement: how should it be determined whether the player should continue hitting or should stand? I'll include the class, as well as the other classes in the package pertaining to the program. I'm thinking I might have to add more methods to the project to make it work.
https://sites.google.com/site/np2701/
If you can, please also point out any convoluted code that I could clean up. Thanks.
If card counting is out of scope, use a basic strategy table for the rules you are using (number of decks, etc.): http://wizardofodds.com/games/blackjack/strategy/calculator/ - index into the table by your hand's point value and the dealer's upcard, and return the option stored in the table. You might store it in the code as a two-dimensional array, or load it from a file. You might store it as characters and interpret what the characters mean, or as an enum; for example you might call the enum Hints, with members Hit, Stand, Split, etc.
A basic strategy table is guaranteed to provide the best odds of success if card counting is ignored, because it takes all of the relevant state into account and chooses the statistically best option.
If we wish to account for card counting too, then we must keep track of the True Count (the running high-low count divided by the number of decks left), and for certain states (player hand score vs dealer revealed card) instead of always doing the same action, we do one action if the True Count is above x and another if it is below x. In addition, you should bet next to nothing if the true count is low (below 1) and bet more and more as it increases past 1, but not so much more you run the risk of bankruptcy. Read more here http://wizardofodds.com/games/blackjack/card-counting/high-low/
To represent such an index-dependent decision programmatically, I would make an object with three fields: the below-index action, the above-index action, and the index value.
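Although the question is in Java, the shape of both ideas is easy to show in a short Python sketch: an action enum, a fragment of a hard-total table, and a three-field entry for count-dependent deviations. The table rows below are placeholders to show the mechanics, not a verified chart; fill in the real values from the Wizard of Odds calculator linked above.
from enum import Enum

class Hint(Enum):
    HIT = "H"
    STAND = "S"
    DOUBLE = "D"
    SPLIT = "P"

# Rows are hard player totals; columns are dealer upcards 2..11 (11 = ace).
HARD_TOTALS = {
    16: "SSSSSHHHHH",   # placeholder row
    12: "HHSSSHHHHH",   # placeholder row
}

def basic_hint(player_total, dealer_upcard):
    row = HARD_TOTALS.get(player_total, "H" * 10)     # default for rows not filled in
    return Hint(row[dealer_upcard - 2])

class Deviation:
    # One count-dependent cell: act one way below the index, another at or above it.
    def __init__(self, below, above, index):
        self.below, self.above, self.index = below, above, index

    def hint(self, true_count):
        return self.above if true_count >= self.index else self.below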
If you really want to suggest the proper play to the user, you need to look up the basic strategy for the game you're simulating. These tables are based on the player's total (and you have to know whether it's soft or hard), and the dealer's upcard.
If all you want to know is "what are my chances of busting on the next hit", that's just (the number of remaining cards that will bust you) / (total remaining cards). This requires not only the player total, but the actual cards. For example, in single deck, if a player has two sevens against a dealer 5, there are 24 bust cards out of the 49 remaining, so you'll bust 24/49 (about 49%) of the time. But if you have a 10 and a 4 (also 14) against a dealer 10, there are only 22 bust cards remaining, for a 45% chance of busting.
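If the "chance of busting on the next hit" number is all you need, a short Python sketch reproduces the arithmetic above (ranks are 2..11, with 10 standing for any ten-value card and 11 for an ace):
from collections import Counter

def bust_probability(player_total, seen_ranks, decks=1):
    remaining = Counter({rank: (16 if rank == 10 else 4) * decks for rank in range(2, 12)})
    for rank in seen_ranks:
        remaining[rank] -= 1
    total_left = sum(remaining.values())
    # An ace never busts (it can count as 1), so only ranks 2..10 can push the total past 21.
    bust_cards = sum(n for rank, n in remaining.items()
                     if rank <= 10 and player_total + rank > 21)
    return bust_cards / total_left

print(bust_probability(14, [7, 7, 5]))     # 7-7 vs a dealer 5: 24/49, about 0.49
print(bust_probability(14, [10, 4, 10]))   # 10-4 vs a dealer 10: 22/49, about 0.45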
This is my first question here; if I did something wrong, tell me...
I'm currently making a draughts game in Java. In fact everything works except the AI.
The AI is currently single-threaded, using minimax with alpha-beta pruning. The code works, I think; it's just very slow, so I can only go 5 levels deep into my game tree.
I have a function that receives my mainboard, a depth (starting at 0) and a maxdepth. At that maxdepth it stops, returns -1, 1 or 0 depending on which player has the most pieces on the board, and ends the recursive call.
If maxdepth isn't reached yet, I calculate all the possible moves and execute them one by one, storing my changes to the mainboard in some way.
I also use alpha-beta pruning, e.g. when I found a move that can make the player win I don't bother about the next possible moves.
I calculate the next set of moves from that mainboard state recursively. I undo those changes (from point 2) when coming out of the recursive call. I store the values returned by those recursive calls and use minimax on those.
That's the situation, now I have some questions.
I'd like to go deeper into my game tree, so I need to reduce the time it takes to calculate moves.
Is it normal that the values of the possible moves of the AI (i.e. the moves the AI can choose between) are always 0? Or will this change if I can go deeper into the recursion? At the moment I can only go 5 deep (maxdepth) into my recursion, because otherwise it takes far too long.
I don't know if it's useful, but how can I convert this recursion into a multithreaded recursion? I think this could divide the working time by some factor...
Can someone help me with this please?
1. Is it normal that the values of the possible moves of the AI (i.e. the moves the AI can choose between) are always 0?
Sounds strange to me. If the number of possible moves is 0, then that player can't play his turn. This shouldn't be very common, or have I misunderstood something?
If the value you're referring to represents the "score" of that move, then "always 0" would indicate that all moves are equally good, which obviously doesn't make for a very good AI.
2. I don't know if it's useful, but how can I convert this recursion into a multithreaded recursion? I think this could divide the working time by some factor...
I'm sure it would be very useful, especially considering that most machines have several cores these days.
What makes it complicated is your "try a move, record it, undo it, try the next move" approach. This indicates that you're working with a mutable data structure, which makes it very hard to parallelize the algorithm.
If I were you, I would let the board / game state be represented by an immutable data structure. You could then treat each recursive call as a separate task and use a pool of threads to process them. You would get close to maximum utilization of the CPU(s) and at the same time simplify the code considerably (by removing the whole restore-to-previous-state code).
Assuming you do indeed have several cores on your machine, this could potentially allow you to go deeper in the tree.
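The question is in Java, but the structure is the same in any language; here is a Python sketch of minimax over an immutable state with the root moves farmed out to a worker pool. legal_moves, apply_move and evaluate are hypothetical hooks into your game (they must be module-level functions so the process pool can pickle them), and alpha-beta pruning is left out because it needs extra care when searched in parallel.
from concurrent.futures import ProcessPoolExecutor

def minimax(state, depth, maximizing, evaluate, legal_moves, apply_move):
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state)
    values = [minimax(apply_move(state, m), depth - 1, not maximizing,
                      evaluate, legal_moves, apply_move) for m in moves]
    return max(values) if maximizing else min(values)

def best_move_parallel(state, depth, evaluate, legal_moves, apply_move):
    # Each root move is searched in its own worker; no undo logic is needed because
    # apply_move returns a fresh immutable state instead of mutating the board.
    moves = legal_moves(state)
    with ProcessPoolExecutor() as pool:
        futures = [pool.submit(minimax, apply_move(state, m), depth - 1, False,
                               evaluate, legal_moves, apply_move) for m in moves]
        scores = [f.result() for f in futures]
    return moves[scores.index(max(scores))]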
I would strongly recommend reading this book:
One Jump Ahead: Computer Perfection At Checkers
It will give you a deep history of computer AI in the game of Checkers and will probably give you some help with your evaluation function.
Instead of having an evaluation function that just gives 1/0/-1 for differing pieces, give a score of 100 for every regular piece and 200 for a king. Then give bonuses for piece structures. For instance, if my pieces form a safe structure that can't be captured, then I get a bonus. If my piece is all alone in the middle of the board, then I get a negative bonus. It is this richness of features for piece configurations that will allow your program to play well. The final score is the difference in the evaluation for both players.
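As a rough Python sketch of such an evaluation; on_back_row and is_isolated are hypothetical helpers standing in for your own piece-structure tests, and the bonus values are made up to show the shape rather than tuned:
PIECE, KING = 100, 200
BACK_ROW_BONUS, ISOLATION_PENALTY = 5, -10       # illustrative, untuned values

def evaluate(board, player):
    # board is assumed to map squares to (owner, is_king) pairs.
    score = 0
    for square, (owner, is_king) in board.items():
        value = KING if is_king else PIECE
        if on_back_row(square, owner):           # hypothetical helper
            value += BACK_ROW_BONUS
        if is_isolated(square, board):           # hypothetical helper
            value += ISOLATION_PENALTY
        score += value if owner == player else -value
    return score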
Also, you shouldn't stop your search at a uniform depth. A quiescence search extends search until the board is "quiet". In the case of Checkers, this means that there are no forced captures on the board. If you don't do this, your program will play extremely poorly.
As others have suggested, transposition tables will do a great job of reducing the size of your search tree, although the program will run slightly slower. I would also recommend the history heuristic, which is easy to program and will greatly improve the ordering of moves in the tree. (Google history heuristic for more information on this.)
Finally, the representation of your board can make a big difference. Fast implementations of search do not make copies of the board each time a move is applied, instead they try to quickly modify the board to apply and undo moves.
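A Python sketch of the quiescence idea, again with hypothetical legal_moves / capture_moves / apply_move / evaluate hooks: at the nominal depth limit the search keeps going as long as forced captures exist, so a position is never scored in the middle of an exchange.
def search(state, depth, maximizing, evaluate, legal_moves, capture_moves, apply_move):
    moves = legal_moves(state)
    if not moves:
        return evaluate(state)                   # no legal moves: let evaluate() score it
    if depth <= 0:
        captures = capture_moves(state)          # forced captures only
        if not captures:
            return evaluate(state)               # quiet position: safe to evaluate
        moves = captures                         # otherwise keep searching the exchange
    values = [search(apply_move(state, m), depth - 1, not maximizing,
                     evaluate, legal_moves, capture_moves, apply_move)
              for m in moves]
    return max(values) if maximizing else min(values)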
(I assume by draughts you mean what we would call checkers here in the States.)
I'm not sure I understand your scoring system inside the game tree. Are you scoring by saying, "A position scores 1 point if the player has more pieces than the opponent, -1 point if the player has fewer pieces, and 0 points if they have the same number of pieces"?
If so, then your algorithm might just be capture-averse for the first five moves, or things may work out so that all captures are balanced. I'm not deeply familiar with checkers, but it doesn't seem impossible that this holds for only five moves into the game. And if it's only 5 plies (where a ply is one player's move rather than a complete pair of opposing moves), maybe it's not unusual at all.
You might want to test this by feeding in a board position where you know absolutely the right answer, perhaps something with only two checkers on the board with one in a position to capture.
As a matter of general principle, though, that board evaluation function doesn't make a lot of sense: it ignores the difference between a regular piece and a crowned piece, and it treats a three-piece advantage the same as a one-piece advantage.
Just wondering if anyone could help me out with some code that I'm currently working on for uni. It's a sliding tile puzzle, and I've implemented an A* algorithm with a Manhattan distance heuristic. At the moment the time it takes to solve the puzzle ranges from a few hundred milliseconds up to about 12 seconds for some configurations. What I wanted to know is whether this range of times is what I should be expecting.
I've never really done any AI before and I'm having to learn this on the fly, so any help would be appreciated.
What I wanted to know is whether this range of times is what I should be expecting.
That's a little hard to figure out just from the information you've provided. It would help if you could describe how you implemented A*, or if you profiled your application and needed help with specific areas that were slow.
One thing to note that'd probably speed up your average solution time: Half of the starting positions of any n-tile puzzle can never lead to a solution, so you can immediately exclude certain configurations very quickly. For example, you cannot solve an 8-tile puzzle that looks like this:
1 2 3
4 5 6
8 7 .
To see why, note that because the blank space has to wind up back where it started, the overall number of "up"/"down" moves must be equal, as does the overall number of "left"/"right" moves. That means that the overall number of moves must be even.
But the 7/8 transposition here is a single swap away from the solved board, without changing the blank position, and a single swap can never be produced by an even number of moves. So this puzzle can't be solved. (However, if we made two transpositions, it would be solvable again.)
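That parity test is cheap to code. A Python sketch for the 3x3 case (states as length-9 tuples in row-major order, 0 for the blank): a state is solvable exactly when its inversion count, ignoring the blank, has the same parity as the goal's. (Boards of even width also need the blank's row taken into account.)
def inversions(tiles):
    seq = [t for t in tiles if t != 0]           # drop the blank
    return sum(1 for i in range(len(seq))
                 for j in range(i + 1, len(seq)) if seq[i] > seq[j])

def solvable_3x3(state, goal=(1, 2, 3, 4, 5, 6, 7, 8, 0)):
    return inversions(state) % 2 == inversions(goal) % 2

print(solvable_3x3((1, 2, 3, 4, 5, 6, 8, 7, 0)))  # the board above: False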
As you should know, you cannot expect any general running time. It always depends on the code itself, in particular how deep your implementation walks down the tree and whether it can take advantage of the processor's features.
For debugging I would save or print out (but this costs time!) which level of the tree you are currently in.
Also remember that the heuristic weights are very important. For example, take this final state:
1 2 3
4 _ 6
7 8 9
Reaching it from
2 1 3
4 _ 6
7 8 9
is much harder than reaching it from
1 _ 3
4 2 6
7 8 9
even though both positions look almost solved; indeed, by the parity argument in the previous answer, the first position (a single transposition with the blank in place) is not reachable at all, while the second is one move away.
I hope that helps.
Obviously, this depends not only on your hardware, but on your implementation.
It's not a good measure of performance, though: What you want to do is determine the effective branching factor of your heuristic, vs the actual branching factor of some other non-heuristic approach.
I don't want to say too much more, since this is a homework problem, but if memory serves, Russell and Norvig cover this in the context of the sliding puzzle itself... chapter three, perhaps? (My copy of R&N is not at hand.)