Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for books, tools, software libraries, and more. You can edit the question so it can be answered with facts and citations.
Closed 5 years ago.
I am trying to find a way to calculate the strength of a hand, specifically how many unique hands (of the 169 starting hands) have a higher chance of winning at showdown given a complete or incomplete board (hold'em).
I've tried to do this a few ways and have been somewhat successful, but it takes an obnoxious amount of time for my program to run, since I'm essentially branching out for every possible hand combo and comparing aggregate results for every scenario to find how many hands are better than hero's. TL;DR: it's terribly inefficient and takes an unrealistic amount of time to run.
However, there are tools like this one http://www.cardplayer.com/poker-tools/odds-calculator/texas-holdem that seem to do the calculation a lot faster. That program appears to evaluate all possible future board combinations, so it can give you the better hand even for incomplete boards. For my particular program, though, I'd like to find the number of hands ahead of hero's at any given point, which would require running the program above for each of the 1326 (52*51/2) starting-hand combinations, finding my hand's place among the rest, and then reducing the better hands to unique starting hands (i.e. 8c7h and 8h7c would both be reduced to 87o).
So my question is: are there any tools/frameworks/references (preferably in Java) out there for calculating the strength of hero's hand versus an anonymous hand, given any complete or incomplete board, that don't take a day to run?
I am not much of a poker guy, but you may find the ThePokerBank site interesting. There is also a whole course dedicated to poker theory from MIT, and a bonus infographic to help you out too.
There are different strategies you can take to tackle this issue, all of them involving quite some knowledge of statistical analysis. I would say that one reason other poker algorithms work faster is that they use a form of vectorized math instead of a series of for loops; languages like Octave/MatLab/R take this strategy for bulk operations.
Good luck and have fun!!
This thread has a lot of information: Stack Overflow Evaluation Algorithms.
There is also material at Code Project, a tutorial on an algorithm with Java source at Github, and implementations in different languages at rosettacode.
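The reduction the question describes (collapsing the 1326 two-card combinations down to the 169 starting-hand classes, so 8c7h and 8h7c both become 87o) can be sketched roughly as follows. The two-character card encoding ("8c", "Ah") is just an illustrative assumption, not from any particular library:

```java
// Sketch: collapse a two-card holding into one of the 169 starting-hand
// classes ("AA", "AKs", "87o", ...). Cards are strings like "8c" or "Ah";
// this encoding is an assumption for the example.
public class StartingHand {
    private static final String RANKS = "23456789TJQKA";

    public static String canonical(String card1, String card2) {
        char r1 = card1.charAt(0), s1 = card1.charAt(1);
        char r2 = card2.charAt(0), s2 = card2.charAt(1);
        // Order by rank, high card first, so "7h 8c" and "8c 7h" agree.
        if (RANKS.indexOf(r1) < RANKS.indexOf(r2)) {
            char tr = r1; r1 = r2; r2 = tr;
            char ts = s1; s1 = s2; s2 = ts;
        }
        if (r1 == r2) return "" + r1 + r2;            // pair, e.g. "77"
        return "" + r1 + r2 + (s1 == s2 ? 's' : 'o'); // suited or offsuit
    }

    public static void main(String[] args) {
        System.out.println(canonical("8c", "7h")); // 87o
        System.out.println(canonical("8h", "7c")); // 87o
        System.out.println(canonical("Ah", "Kh")); // AKs
    }
}
```

Grouping enumeration results by this key means each equivalence class is evaluated once instead of up to 16 times, which is a large part of making the search tractable.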
Closed. This question needs to be more focused. It is not currently accepting answers.
Want to improve this question? Update the question so it focuses on one problem only by editing this post.
Closed 3 years ago.
I'm planning to learn Big Data. I have just gone through tutorials, but I'm a little bit confused about which modules I need to concentrate on from a developer's perspective. Presently I'm working in Java. I hope your response will help me with the next step of my Big Data journey.
First, I'd propose getting familiar with the term: "Big Data" is a fluffy and debated one, more a marketing catchphrase than a technical specification, covering a huge range of technology.
Starting from that, I'd try to determine which aspect (IoT, building/running datacenters, ETL/data integration/warehousing, analytics/statistics/machine learning...) or perhaps which field of application (retail, bioinformatics...) you're interested in, and which is reasonable to access from an employment point of view. I'd also think about the tech stack you'd like to work with (Scala, Python...).
Reverse engineering job offers could be a way to get to that information actually.
The Data Scientist profile (ETL + machine learning + visualization) has gained broad acceptance and encompasses certain skill sets; Big Data Analyst and Big Data Engineer can also be found, arguably with less well defined profiles.
Nowadays one can get a whole MSc in data science (here's a personal evaluation of it), but perhaps you can get your foot in the door by a less fancy route too. Training comes in varying quality; I found Andrew Ng's machine learning and deep learning (big neural networks) MOOCs stunning, and everything coming from the EPFL/Scala side (if you want to go down that road) is technically superior and fine in presentation (I tried Big Data Analysis with Scala and Spark).
Closed. This question needs to be more focused. It is not currently accepting answers.
Want to improve this question? Update the question so it focuses on one problem only by editing this post.
Closed 7 years ago.
Hey guys,
I'm a relatively new programmer in Java (and in general), but I want to know different ways of minimizing memory and RAM usage in the programs I make. I've heard of a few, such as StringBuilder as an alternative to String concatenation with +, but I'd like to hear what you know about how to maximize performance and why.
Thanks in advance!
In modern programming, it is a far better use of your time to focus on making your code readable rather than trying to micro-optimise.
Modern compilers do an extremely impressive job of these small optimisations so that everyday programmers don't have to deal with them, and in the majority of cases it is better to leave it up to the compiler than to attempt it yourself.
In general I would say that the largest performance improvements can be gained by thinking about the design of your program ahead of time, before you even start typing. Once you've already bashed out 10,000+ lines of code implementing your latest 3D high-performance MMORPG, and you realise it's not as high-performance as you were hoping, making any drastic design changes will be considerable work. Some things to think about beforehand are:
Think about your algorithms' complexity: for example, string concatenation can be O(n^2) using String, but O(n) using StringBuilder.
Use object pools to reuse memory rather than creating new instances each time
Reuse existing library implementations of data-structures etc, rather than trying to recreate them yourself. Many more man-hours will have been put into these implementations than you could possibly spend on them, and so they are likely to be more efficient/robust
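The StringBuilder point from the list above can be made concrete. This is a minimal sketch of the two strategies, not a rigorous benchmark:

```java
// Sketch: the two concatenation strategies compared above.
// String += copies the whole accumulated string on every iteration
// (quadratic work overall); StringBuilder appends into a growable buffer.
public class ConcatDemo {
    public static String withString(int n) {
        String s = "";
        for (int i = 0; i < n; i++) s += "x"; // O(n^2) total copying
        return s;
    }

    public static String withBuilder(int n) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < n; i++) sb.append("x"); // amortized O(n)
        return sb.toString();
    }

    public static void main(String[] args) {
        // Identical results; the difference only shows up in running
        // time as n grows large.
        System.out.println(withString(5).equals(withBuilder(5))); // true
    }
}
```

For a handful of concatenations the compiler handles this well on its own; the quadratic cost only bites when the concatenation happens inside a loop.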
Finally, I should mention that if you do go to optimise some existing code because it's not performing as well as needed, it's very important to know specifically where the problem area is. In this case a profiler is invaluable, and should help pinpoint the particular areas that are affecting performance. They might not be where you expect!
Closed. This question is opinion-based. It is not currently accepting answers.
Want to improve this question? Update the question so it can be answered with facts and citations by editing this post.
Closed 9 years ago.
OK, I have been playing around with Java for a year now, and I can say that it is in my power to write a fully functional program.
A month ago I started studying vectors, and when I attempted to implement them in LWJGL I realized that Java is not fast enough for the level of graphics that I wish to generate.
That is my problem, and I have decided that I must learn a stronger language. But where do I begin? I have tinkered around in C/C++ before, but it kills my ambition to go and start over after already spending a whole year.
My algorithm for rendering a vector is:
z = r(cos t + j sin t) // where t is the angle in degrees and r is its length
(for the curious)
The program then runs a loop that increments the length by 1, gets the endpoint's X and Y, and draws a pixel at that spot.
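In Java terms, the polar-coordinate step in that loop might look like the following sketch. One common pitfall it guards against: Math.cos/Math.sin take radians, so the degrees must be converted first.

```java
// Sketch: one step of the rendering loop described above -- take an
// angle in degrees and a length r, and get the integer pixel
// coordinates of the vector's endpoint.
public class VectorPoint {
    public static int[] endpoint(double degrees, double r) {
        double t = Math.toRadians(degrees);        // cos/sin expect radians
        int x = (int) Math.round(r * Math.cos(t)); // real part
        int y = (int) Math.round(r * Math.sin(t)); // "j" (imaginary) part
        return new int[] { x, y };
    }

    public static void main(String[] args) {
        int[] p = endpoint(0, 10);
        System.out.println(p[0] + "," + p[1]); // 10,0
        p = endpoint(90, 10);
        System.out.println(p[0] + "," + p[1]); // 0,10
    }
}
```

Two Math.cos/Math.sin calls per pixel are nowhere near a bottleneck on a modern JVM; if this loop feels slow, the problem is almost certainly elsewhere (e.g. drawing pixels one at a time).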
Not for game coding.
Name one free e-book that will get me on my feet with C.
I realized that Java is not fast enough for the level of graphics that I wish to generate.
Java is plenty fast. Unless you want to work with high performance graphics, you can use it safely (and you will probably be able to use it even with high performance graphics).
In the vast majority of cases, the speed of your application will be a function of algorithmic complexity, not language used.
After your application is completed, if it doesn't run fast enough, you can optimize. If it still doesn't run fast enough, you can implement critical parts in C/C++/your-language-here.
If you start from "the language is not fast enough", you're already doing it wrong.
If you want to write a game, use an engine. There are many great engines in many languages. If you wonder about language-inherent performance and try to solve technological issues yourself, you are going to be stuck doing that forever.
Do not reinvent the wheel. Stop worrying about which programming language is the best fit, and rather think about which engine is best suited for what you want to make.
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Questions concerning problems with code you've written must describe the specific problem — and include valid code to reproduce it — in the question itself. See SSCCE.org for guidance.
Closed 9 years ago.
I am currently in the process of going through old programming olympiad questions, and found question 5 rather difficult. The problem belongs to graph theory and involves computing the least expensive path that visits all nodes. Full details can be found here: problem
Would it be suitable to use A* search algorithm? What kind of algorithm would you use to solve the problem, which is fast to implement and can solve the problem in the given time period?
As @kiheru stated, A* won't work.
This is the traveling salesman problem, and it's an NP-complete problem. Replace distance traveled with tolls, and you get the same problem. The Traveling Salesman link has several such algorithms.
Traveling Salesman
You'll find different algorithms depending on the number of cities, but the problem gets much more computationally expensive as you add cities, to the point where a computer isn't the best choice for an exact solution. There are many different techniques for getting an approximation, but an exact solution quickly becomes infeasible.
If I were to code it, I'd use something called Linguistic Geometry (something I learned in grad school). Basically you treat the nodes as a game board, and you take one step at a time towards the answer you want and evaluate it. This won't solve it, but it will give you a good approximation in a very short amount of time.
This is known as the travelling salesman problem, and it is NP-complete. That means there is no known method of solving it that is generally much faster than brute force (well, there is an O(2^n * n^2) solution based on dynamic programming, Held-Karp). Since you are dealing with only 6 nodes, which gives at most 6! = 720 orderings to check, the simplest solution is to try every ordering of the cities and record which is cheapest.
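The brute-force approach fits in a few lines for this size of input. A sketch, fixing city 0 as the start so only the remaining cities are permuted; the cost matrix here is invented for illustration:

```java
import java.util.*;

// Sketch: try all orderings of the remaining cities (city 0 fixed as the
// start) and keep the cheapest round trip. Fine for ~6 cities,
// hopeless for large n.
public class BruteForceTsp {
    public static int cheapestTour(int[][] cost) {
        int n = cost.length;
        List<Integer> rest = new ArrayList<>();
        for (int i = 1; i < n; i++) rest.add(i);
        return permute(cost, rest, 0, Integer.MAX_VALUE);
    }

    private static int permute(int[][] cost, List<Integer> order, int k, int best) {
        if (k == order.size()) { // full ordering: price the tour 0 -> ... -> 0
            int total = 0, prev = 0;
            for (int city : order) { total += cost[prev][city]; prev = city; }
            total += cost[prev][0]; // return to the start
            return Math.min(best, total);
        }
        for (int i = k; i < order.size(); i++) { // choose the k-th city
            Collections.swap(order, k, i);
            best = permute(cost, order, k + 1, best);
            Collections.swap(order, k, i);       // undo for the next choice
        }
        return best;
    }

    public static void main(String[] args) {
        int[][] cost = { // toy symmetric cost matrix, invented for the example
            { 0, 2, 9, 10 },
            { 2, 0, 6, 4 },
            { 9, 6, 0, 8 },
            { 10, 4, 8, 0 }
        };
        System.out.println(cheapestTour(cost)); // cheapest round trip cost
    }
}
```

Within the olympiad's time limit this is more than fast enough for 6 nodes, and it is far quicker to write correctly than Held-Karp.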
(Also, contrary to @kiheru's comment above, A* is not a heuristic. It uses a heuristic, but still finds an exact solution to the shortest-path problem. Either way, however, it does not apply to your problem.)
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for books, tools, software libraries, and more. You can edit the question so it can be answered with facts and citations.
Closed 5 years ago.
I'm looking for a tool to count source lines of code for Java as well as giving an estimate of the number of man-years invested in the code. Since Java code tends to be more verbose than other languages, with a lot of boilerplate code (anemic beans) generated by the IDE, I want the tool's metric to take this into account.
If someone can just provide the formula to convert source line count to man-years (for Java), that would also be good.
This sounds like a really bad idea.
The best way to estimate the number of man-years of work on a piece of code is to look at who worked on it and for how long.
Trying to infer man-years from SLOC is likely to be highly inaccurate and misleading. For example:
At some point in the software lifecycle many lines of code can be added. In some periods of maintenance / refactoring code may be actually taken away.
Code that has had a lot of requirements changes and quick hacks is likely to have more SLOC than equivalent code that was cleanly designed and written in the first place.
The same functionality can be written with 100 lines or 1000 lines depending on the libraries / frameworks used.
Are you going to count SLOC in libraries too? What about the JVM? What about the underlying OS?
In short, any estimate of man years derived from SLOC is likely to be pretty meaningless.
Although you may want the information for questionable purposes, SLOC is a nice, easy, but not very useful metric. Make sure you read this older conversation first.
One of my most productive days was throwing away 1000 lines of code. (Kent Beck)
It is not going to be accurate, for various reasons. Some from my experience:
Code gets added, changed, or deleted: if you really want this, query your SCM for the change history and then map it to changed lines.
Architectural changes, or introducing a library that replaces your code: in our case this reduced the line count.
Coding is only part of the change: design discussions, client interactions, documentation etc. will not be reflected in code, even though I consider them development effort.
Finally, developers are of varying productivity (1:40, some said): how are you going to map lines into developer time?
SLOC is a useful tool to say my code base is 'this large' or 'this small'.
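The "this large" number itself is cheap to compute. A naive sketch of a physical-SLOC counter (blank lines and comment-only lines skipped, block comments handled crudely); this is only to show the idea, not a replacement for a real tool like sloccount:

```java
import java.util.*;

// Sketch: a naive physical-SLOC count over Java source lines --
// skips blank lines and comment-only lines. Real tools handle many
// more cases (strings containing "//", mixed code and comments, ...).
public class SlocCounter {
    public static int count(List<String> lines) {
        int sloc = 0;
        boolean inBlockComment = false;
        for (String raw : lines) {
            String line = raw.trim();
            if (inBlockComment) {                 // inside /* ... */
                if (line.contains("*/")) inBlockComment = false;
                continue;
            }
            if (line.isEmpty() || line.startsWith("//")) continue;
            if (line.startsWith("/*")) {          // block comment starts
                if (!line.contains("*/")) inBlockComment = true;
                continue;
            }
            sloc++;                               // a line with code on it
        }
        return sloc;
    }

    public static void main(String[] args) {
        List<String> src = Arrays.asList(
            "public class A {",
            "    // a comment",
            "",
            "    int x = 1; ",
            "}"
        );
        System.out.println(count(src)); // 3
    }
}
```

Which is exactly the point of the answers above: the counting is trivial; it's the mapping from that number to effort that is meaningless.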
Looks like http://www.dwheeler.com/sloccount/ is the best bet.
At the office I use ProjectCodeMeter to estimate man-years invested in a source code base. It's kind of a luxury tool at that price, but I did use the free trial version at home on occasion :)