Performance test: Flash/AS3, Processing/Java and openFrameworks/C++

I need to compare the performance of AS3, Processing and openFrameworks for my Bachelor thesis. Are there any comparison tables you know of, or do I have to do the test myself?
What would a good test look like? I'm just focused on graphics, so I thought about maybe three different programs: a 2D graphics app, a typography app and a 3D app. Are there any pitfalls? What's the best way to test the performance?
All suggestions are appreciated!

I know in the AS3 world there is a popular performance monitor called stats; you can find it here. Honestly, I think you may be comparing apples to oranges. My initial assumption would be that openFrameworks (C++) outperforms Processing (Java), and Processing outperforms AS3, for many of the problems you will be exploring. I am sure there are many Java and C++ performance monitors that you can plug into your Processing and openFrameworks programs to collect the data you need, or you can roll your own.
Of course, you also need to identify exactly what you will be testing. My initial thought would be to test frame rates, memory consumption, CPU utilization, and execution time. Personally, I like to develop particle emitters and push my programs to the limit on the number of particles they can process (see the sketch below). You will quickly see that Processing and openFrameworks kick AS3's butt with this.
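For example, here is a minimal particle stress test in Processing (Java mode): it keeps adding particles as long as the sketch holds a target frame rate, so the final particle count becomes your metric. The emission rate and the 30 fps threshold are arbitrary choices, and you would port the same logic to AS3 and openFrameworks for a fair comparison.

```java
// Minimal Processing (Java mode) stress test: keep adding particles
// while the sketch can hold the target frame rate, then read off the
// particle count it stabilizes at.
ArrayList<PVector> pos = new ArrayList<PVector>();
ArrayList<PVector> vel = new ArrayList<PVector>();
float targetFps = 30;

void setup() {
  size(800, 600);
  stroke(255);
}

void draw() {
  background(0);
  // Emit more particles only while we still hold the target frame rate.
  if (frameRate > targetFps) {
    for (int i = 0; i < 50; i++) {
      pos.add(new PVector(width / 2, height / 2));
      vel.add(PVector.random2D().mult(random(1, 4)));
    }
  }
  // Move and draw every particle.
  for (int i = 0; i < pos.size(); i++) {
    PVector p = pos.get(i);
    p.add(vel.get(i));
    ellipse(p.x, p.y, 2, 2);
  }
  println(pos.size() + " particles @ " + nf(frameRate, 0, 1) + " fps");
}
```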
Well I hope I helped.
Have fun!
Nick
nickgs.com


How do I make a reinforcement learning agent in Java?

I have a challenge that my teacher gave me: beat an army of his soldiers on an 18x24 grid with random obstacles placed on the board. The game is turn-based and I have an army of 50 soldiers, each of which must either move or attack on its turn.
My problem is I only have access to creating a class of soldiers to fight in this environment. Currently I have a method that evaluates the board position by counting how many soldiers each team has left and computing yourTeam - enemyTeam as the current score, and a method that produces the legal moves for a soldier.
I want to know how I would create a reinforcement learning agent in Java with what I have access to. If you know any ways to do this or any resources that may help that would be great. Thank you for the help!
Java is not a good language for math-heavy computation (which is what you will need to do for RL). You could attempt to implement the Q-learning, value-iteration or policy-iteration algorithms, but I would avoid doing anything with neural networks/modern deep RL approaches here, as your workload will increase dramatically.
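For reference, tabular Q-learning itself is only a few lines of Java; the hard part is the state/action encoding, which is left here as hypothetical integer indices. A minimal sketch:

```java
import java.util.Random;

// Minimal tabular Q-learning sketch. Encoding states and actions as
// small integer indices is assumed here and is the hard part for your
// game: a full 18x24 board with 100 soldiers has far too many states
// to tabulate like this, which is exactly the problem described below.
public class QLearner {
    final double[][] q;          // Q[state][action]
    final double alpha = 0.1;    // learning rate
    final double gamma = 0.95;   // discount factor
    final double epsilon = 0.1;  // exploration probability
    final Random rng = new Random();

    QLearner(int numStates, int numActions) {
        q = new double[numStates][numActions];
    }

    // Epsilon-greedy action selection.
    int chooseAction(int state) {
        if (rng.nextDouble() < epsilon) {
            return rng.nextInt(q[state].length);
        }
        int best = 0;
        for (int a = 1; a < q[state].length; a++) {
            if (q[state][a] > q[state][best]) best = a;
        }
        return best;
    }

    // Standard update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    void update(int s, int a, double reward, int sNext) {
        double maxNext = Double.NEGATIVE_INFINITY;
        for (double v : q[sNext]) maxNext = Math.max(maxNext, v);
        q[s][a] += alpha * (reward + gamma * maxNext - q[s][a]);
    }
}
```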
With regard to your problem, if you are going to implement one of the old-school algorithms, think about your state and action space. I have serious concerns about the size of your action space: even with a small number of moves for each soldier (say 3: attack, move up, move down), the joint action space across 50 soldiers is 3^50 combinations per turn. Even that many will be difficult to deal with; any more moves per soldier (even 4 or 5) will send you deep into some complex topics in RL.
Other problems include defining a good reward signal and efficiently running potentially millions of simulated games.
The short answer is that this is not something to be taken lightly: it would be challenging and time-consuming even for someone with experience in the field, and Java is a no-no here (Python is better supported). Given you probably don't have long to find a good solution, I would recommend trying a different approach: planning-based, maybe, or hard-coding a reasonable strategy.
If you still want to go ahead and read up on the topic, here are some good resources:
Reinforcement Learning: An Introduction (Sutton & Barto) - any edition is fine
Selected chapters in Artificial Intelligence: A Modern Approach (Russell & Norvig)
Hope this helps, and sorry it may not be the answer you were hoping for!

Optimization: oj! Algorithms (Java) versus SCIP (Python)

Does anybody know how these two solvers, oj! Algorithms for Java and SCIP for Python, compare performance-wise (as in: which one is the fastest) when dealing with a typical MILP (Mixed-Integer Linear Programming) problem? At first sight I can't find anything online that points me in the right direction, and I'm curious!
Thanks in advance!
The SCIP Optimization Suite is one of the fastest MIP and MINLP solvers available in source code. PySCIPOpt, its Python interface, may be a bit slower when constructing the model, but solving times are still good since it runs the pure SCIP C library in the background.
To be honest, I have no experience with oj! Algorithms and cannot say how good this solver is. Apparently it can link to Gurobi or CPLEX, so I guess in that case it's mainly a modelling wrapper around those APIs, providing high performance.
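For a sense of what modelling in oj! Algorithms looks like, here is a toy MILP sketched from memory of ojAlgo's ExpressionsBasedModel API; the method names should be verified against the current docs, and the model itself is an arbitrary example.

```java
import org.ojalgo.optimisation.Expression;
import org.ojalgo.optimisation.ExpressionsBasedModel;
import org.ojalgo.optimisation.Optimisation;
import org.ojalgo.optimisation.Variable;

// Toy MILP: maximise 3x + 2y subject to x + y <= 4, with x and y
// non-negative integers. API usage is from memory of ojAlgo's
// ExpressionsBasedModel; check the current documentation.
public class TinyMilp {
    public static void main(String[] args) {
        Variable x = Variable.make("x").lower(0).integer(true).weight(3);
        Variable y = Variable.make("y").lower(0).integer(true).weight(2);

        ExpressionsBasedModel model = new ExpressionsBasedModel();
        model.addVariable(x);
        model.addVariable(y);

        // Constraint: x + y <= 4
        Expression budget = model.addExpression("budget").upper(4);
        budget.set(x, 1);
        budget.set(y, 1);

        Optimisation.Result result = model.maximise();
        System.out.println(result);
    }
}
```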
In the end it comes down to your modelling preferences/requirements and your specific problem instances.

Custom math functions vs. supplied Math functions?

I am basically making a Java program that will have to run a lot of calculations pretty quickly (each frame, aiming for at least 30 fps). These will mostly be trigonometric and power functions.
The question I'm asking is:
Which is faster: using the Math functions already supplied by Java, or writing my own?
The built-in Math functions will be extremely difficult to beat, given that most of them are backed by JVM intrinsics that map to hardware instructions. You could conceivably beat some of them by trading away accuracy, with a lot of work, but you're very unlikely to beat the Math utilities otherwise.
You will want to use the java.lang.Math functions, as most of them run natively in the JVM; you can see the source code here.
Lots of very intelligent and well-qualified people have put a lot of effort, over many years, into making the Math functions work as quickly and as accurately as possible. So unless you're smarter than all of them, and have years of free time to spend on this, it's very unlikely that you'll be able to do a better job.
Most of them are native too; they're not actually implemented in Java. So writing faster versions of them in Java is a complete no-go. You'd probably need a mixture of C and assembly language to write your own, and you'd need to know all the quirks of whatever hardware you're going to be running this on.
Moreover, the current implementations have been battle-tested over many years by the millions of people around the world using Java in some way. You're not going to have access to the same body of testers, so your functions will automatically be more error-prone than the standard ones. This is unavoidable.
So are you still thinking about writing your own functions?
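If you do end up measuring this yourself, be careful: naive benchmarks are easily distorted by JIT warm-up and dead-code elimination. A rough sketch of a timing harness follows (the iteration count is arbitrary, the accumulator guards against dead-code elimination, and JMH is the proper tool for anything serious):

```java
// Rough-and-ready timing sketch for comparing math functions.
// Use JMH for real measurements; this only illustrates the idea.
public class MathTiming {
    public static void main(String[] args) {
        final int n = 10_000_000;
        double sink = 0;

        // Warm-up pass so the JIT compiles the hot loop before we measure.
        for (int i = 0; i < n; i++) sink += Math.sin(i * 1e-6);

        long start = System.nanoTime();
        for (int i = 0; i < n; i++) sink += Math.sin(i * 1e-6);
        long elapsed = System.nanoTime() - start;

        // Printing sink keeps the JIT from optimizing the loop away.
        System.out.printf("%d sin calls in %.1f ms (sink=%f)%n",
                n, elapsed / 1e6, sink);
    }
}
```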
If you can bear roughly 1e-15 relative error (or more like 1e-13 for pow(double,double)), you can try this library, which should be faster than java.lang.Math if you call it a lot: http://sourceforge.net/projects/jafama/
As others have said, it's usually hard to beat java.lang.Math in pure Java if you want to keep similar (1-ulp-ish) accuracy, but slightly less accuracy in double precision is often totally bearable (and still much more accurate than computing with floats), and can allow a noticeable speed-up.
What might be an option is caching the values. If you know you will only need a fixed set of inputs, or you can get away without perfect accuracy, this can save a lot of time. For example, if you want to draw a lot of circles, pre-compute the values of sin and cos for each degree, then use those values when drawing. Most circles will be small enough that you can't see the difference, and the small number that are very big can fall back to the library functions.
Be sure to test whether this is worth it: on my five-year-old MacBook I can do a million evaluations of cos per second.
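A sketch of that degree-resolution lookup table might look like this (the one-degree resolution and float precision are just one choice):

```java
// Degree-resolution sin/cos lookup table, precomputed once at class load.
public final class SinCosTable {
    private static final float[] SIN = new float[360];
    private static final float[] COS = new float[360];
    static {
        for (int d = 0; d < 360; d++) {
            SIN[d] = (float) Math.sin(Math.toRadians(d));
            COS[d] = (float) Math.cos(Math.toRadians(d));
        }
    }

    /** Input is truncated to whole degrees; negative angles wrap correctly. */
    public static float sin(int degrees) {
        return SIN[((degrees % 360) + 360) % 360];
    }

    public static float cos(int degrees) {
        return COS[((degrees % 360) + 360) % 360];
    }
}
```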

Where can I get a large number of rules and facts, or how can I generate them, for a Drools benchmark?

I would like to test Drools performance, such as memory consumption and inference speed, on a large amount of data. I did this by running the benchmarks available in the Drools project (https://github.com/droolsjbpm/drools), including the commonly used manners, waltz and waltzdb benchmarks. But on my computer they only take a dozen seconds or so. Could you suggest any sources of rules and objects/facts that I can use and test with Drools for free? Maybe it is possible to generate such data and rules? If so, how could I do that?
Thanks for the help.
It's worth noting that those benchmarks serve little practical purpose. They are specifically designed to do things that are inefficient in rule engines, and they have very little value even for comparison between engines, given that you're unlikely to ever write a real-world application that looks anything like Miss Manners.
If you just want large amounts of data for your tests, there is plenty of open data out there. For instance, the UK government provides a variety of open data sets; you can pick one that suits your experiment here:
http://data.gov.uk/data/search
Or you could grab a load of gene sequence data from GenBank:
http://www.ncbi.nlm.nih.gov/genbank/
There's loads of free data out there, for which you could write rules.
If you are really looking to benchmark rule engines, then it would probably be better to generate the data yourself (see the sketch at the end of this answer). That's the best way to ensure you get controlled statistical variation.
However, all you will be doing is benchmarking a specific set of rules. Any such benchmarks would be redundant as soon as the rules change.
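If you do go the generation route, here is a rough sketch using the Drools 6+ KIE API. The Order fact class and the "benchSession" session name are hypothetical placeholders; adapt them to your own model and kmodule.xml.

```java
import java.util.Random;
import org.kie.api.KieServices;
import org.kie.api.runtime.KieContainer;
import org.kie.api.runtime.KieSession;

// Generates synthetic facts and times rule firing. "Order" and
// "benchSession" are hypothetical; substitute your own fact types
// and the session name configured in your kmodule.xml.
public class DroolsBench {
    public static class Order {
        public final int customerId;
        public final double amount;
        public Order(int customerId, double amount) {
            this.customerId = customerId;
            this.amount = amount;
        }
    }

    public static void main(String[] args) {
        KieContainer container =
                KieServices.Factory.get().getKieClasspathContainer();
        KieSession session = container.newKieSession("benchSession");
        Random rng = new Random(42); // fixed seed for reproducible runs

        long start = System.nanoTime();
        for (int i = 0; i < 1_000_000; i++) {
            session.insert(new Order(rng.nextInt(10_000),
                                     rng.nextDouble() * 500));
        }
        int fired = session.fireAllRules();
        System.out.printf("%d rules fired in %.1f ms%n",
                fired, (System.nanoTime() - start) / 1e6);
        session.dispose();
    }
}
```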

Concurrently searching a game tree with minimax and alpha-beta pruning: is that possible?

I'm going to be competing in a board-game AI competition at my school and am trying to come up with some ideas for concurrency to gain an edge. I will most likely be at a disadvantage because I will be implementing it in Java, and I understand C or C++ would be much faster.
It doesn't seem like you could just split the game tree in half, because move ordering should put the best moves first, and it seems difficult or maybe even impossible to communicate the current alpha/beta at a given depth. I'm also going to be using transposition tables, which would need to be synchronized.
Besides searching, is there something a second thread could be doing to aid the search or provide some kind of speed increase? Each AI has 5 seconds to make a move, and your program can keep working while the opponent is thinking.
Any input, no matter how obscure, would be appreciated.
An overview can be found in the Chess Programming Wiki's parallel search article. Even if your actual game is not chess, many concepts will also apply. The site also covers sophisticated solutions for shared transposition tables.
However, when you don't have much time, I would not start with a parallel search. You are correct that parallelism can increase the strength of the search algorithm, but it is very difficult to get right, and the benefits are lower than one would expect.
If you want to experiment with parallelism, go ahead: it is an interesting topic. But if you just want the best results in a limited amount of time, I would recommend sticking with a sequential search and instead focusing on move ordering and correctness (sketched below).
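For context, plain sequential alpha-beta with simple move ordering looks roughly like this. Board, Move, evaluate(), legalMoves(), apply() and scoreMoveHeuristic() are hypothetical stand-ins for your game's own types and heuristics.

```java
import java.util.Comparator;
import java.util.List;

// Sequential minimax with alpha-beta pruning and move ordering.
// Board, Move, evaluate(), legalMoves(), apply() and
// scoreMoveHeuristic() are placeholders for your game's own types.
int alphaBeta(Board board, int depth, int alpha, int beta, boolean maximizing) {
    if (depth == 0 || board.isGameOver()) {
        return board.evaluate();
    }
    List<Move> moves = board.legalMoves();
    // Move ordering: search the most promising moves first so
    // alpha-beta cuts off the rest of the list sooner.
    moves.sort(Comparator.comparingInt(board::scoreMoveHeuristic).reversed());

    if (maximizing) {
        int best = Integer.MIN_VALUE;
        for (Move m : moves) {
            best = Math.max(best,
                    alphaBeta(board.apply(m), depth - 1, alpha, beta, false));
            alpha = Math.max(alpha, best);
            if (alpha >= beta) break; // beta cutoff
        }
        return best;
    } else {
        int best = Integer.MAX_VALUE;
        for (Move m : moves) {
            best = Math.min(best,
                    alphaBeta(board.apply(m), depth - 1, alpha, beta, true));
            beta = Math.min(beta, best);
            if (alpha >= beta) break; // alpha cutoff
        }
        return best;
    }
}
```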
It is possible, but the threads have to communicate for alpha-beta pruning to help. Move ordering must also be tweaked: it doesn't help if one thread gets all the best-rated moves to analyze while the others don't.
