What Java class can be used to calculate runtime?

I'm trying to write a java program that generates a million random numbers, and then use Bubble Sort, Insertion Sort and Merge Sort to sort them. Finally I want to display the runtime in nanoseconds of each sorting algorithm. Is there a class in Java that allows me to do so?

You can use System.nanoTime() to measure the time spent in your code. (Use the time difference between the start and end of the code under test.) Be aware, however, that the actual time measurement probably does not have nanosecond resolution.
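For example, a minimal sketch of that start/end-difference approach (Arrays.sort is just a stand-in for your own bubble, insertion or merge sort):

import java.util.Arrays;
import java.util.Random;

public class SortTiming {
    public static void main(String[] args) {
        // A million random numbers to sort
        int[] data = new int[1_000_000];
        Random rng = new Random();
        for (int i = 0; i < data.length; i++) {
            data[i] = rng.nextInt();
        }

        long start = System.nanoTime();   // timestamp before the code under test
        Arrays.sort(data);                // replace with your own sorting routine
        long elapsed = System.nanoTime() - start;

        System.out.println("Sort took " + elapsed + " ns");
    }
}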
You might want to look into a good Java benchmarking framework for measuring your code's performance. A general web search will turn up quite a few good candidates. Doing timing tests is not at all an easy thing to get right.

Related

Cyclomatic Complexity in Intellij

I was working on an assignment today that basically asked us to write a Java program that checks whether the HTML syntax in a text file is valid. It's a pretty simple assignment and I did it very quickly, but in doing it so quickly I made it very convoluted (lots of loops and if statements). I know I can make it a lot simpler, and I will before turning it in, but amid my procrastination I started downloading plugins and seeing what information they could give me.
I downloaded two in particular that I'm curious about - CodeMetrics and MetricsReloaded. I was wondering what exactly the numbers they generate correspond to. I saw one post that was semi-similar, and I read it as well as the linked articles, but I'm still having trouble understanding a couple of things: namely, what the first two columns (CogC and ev(G)) mean, as well as some more clarification on the other two (iv(G) and v(G)).
MetricsReloaded Method Metrics:
MetricsReloaded Class Metrics:
These previous numbers are from MetricsReloaded, but this other application, CodeMetrics, which also calculates cyclomatic complexity, gives slightly different numbers. I was wondering how these numbers correlate, and whether someone could give a brief general explanation of all this.
CodeMetrics Analysis Results:
My final question is about time complexity. My understanding of Cyclomatic complexity is that it is the number of possible paths of execution and that it is determined by the number of conditionals and how they are nested. It doesn't seem like it would, but does this correlate in any way to time complexity? And if so, is there a conversion between them that can be easily done? If not, is there a way in either of these plug-ins (or any other in IntelliJ) that can automate time complexity calculations?
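(For reference, a small made-up example of the kind of count I mean: most tools derive cyclomatic complexity as one plus the number of decision points in a method, so a method like the following would typically be reported with v(G) around 3.)

// Hypothetical illustration: one loop condition plus one if gives
// two decision points, so cyclomatic complexity = 2 + 1 = 3.
static int countEven(int[] values) {
    int count = 0;
    for (int v : values) {      // decision point 1
        if (v % 2 == 0) {       // decision point 2
            count++;
        }
    }
    return count;
}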

different types of sorting and their time

I know that as the number of elements is doubled, the time to sort with selection sort and insertion sort roughly quadruples.
How about merge sort and quick sort?
Let's say it takes 2 seconds to sort 100 items using merge sort.
How long would it take to sort 200 items using merge sort and quick sort?
Merge sort is usually O(n log(n)). Quick sort can be O(n log(n)), but in the worst case it will end up closer to O(n^2). I'll leave the math to you, as it's fairly simple. The nice thing about common sorting algorithms is that they are very well documented online; there are likely plenty of calculators online that could give you specifics. As for how long it would actually take to run an algorithm, I'm no expert, but I'm guessing that would depend largely on hardware. You should be more concerned with the Big-O of whatever you're running, because that's the only thing you can really control as a programmer.
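As a rough sketch of that math (ignoring constants and hardware, and assuming quick sort also took about 2 seconds for 100 items):

public class SortScaling {
    public static void main(String[] args) {
        // Back-of-the-envelope estimate only; real timings depend on constants and hardware.
        double t100 = 2.0; // seconds to sort 100 items (given in the question)

        // Merge sort: time ~ n*log(n), so scale by (200*log 200)/(100*log 100) ≈ 2.3
        double mergeRatio = (200 * Math.log(200)) / (100 * Math.log(100));
        System.out.printf("Merge sort, 200 items: ~%.1f s%n", t100 * mergeRatio);      // ~4.6 s

        // Quick sort worst case: time ~ n^2, so doubling n multiplies the time by 4
        double worstRatio = Math.pow(2, 2);
        System.out.printf("O(n^2) worst case, 200 items: ~%.1f s%n", t100 * worstRatio); // ~8 s
    }
}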

Sensibility of converting Matlab program in Java to improve performance

Hi,
I have a somewhat hypothetical question. We've just written some code implementing a genetic algorithm to find a solution to a sudoku game, as part of a Computational Intelligence course project. Unfortunately it runs very slowly, which limits our ability to perform an adequate number of runs to find the optimal parameters. The question is whether reprogramming the whole thing - the code base is not that big - in Java would be a viable way to boost the speed of the software. We really need something like a 10x performance improvement, and I am doubtful that a Java version would be that much snappier. Any thoughts?
Thanks
=== Update 1 ===
Here is the code of the function that is computationally most expensive. It's a GA fitness function that iterates through the population (different sudoku boards) and computes, for each row and column, how many elements are duplicates. The parameter n is passed in and is currently set to 9. That is, the function computes how many elements of a row in the range 1 to 9 come up more than once. The higher the number, the lower the fitness of the board, meaning that it is a weak candidate for the next generation.
The profiler reports that the two lines calling intersect inside the for loops are causing the poor performance, but we don't know how to optimize the code further. It follows below:
function [fitness, finished, d, threshold] = fitness(population_, n)
finished = false;
threshold = false;
V = ones(n, 1);
d = zeros(size(population_, 2), 1);
s = 1:1:n;
for z = 1:size(population_, 2)
    board = population_{z};
    t = 0;
    l = 0;
    for i = 1:n
        l = l + n - length(intersect(s, board(:, i)'));
        t = t + n - length(intersect(s, board(i, :)));
    end
    k = sum(abs(board * V - t));
    f = t + l + k / 50;
    if t == 2 && l == 2
        threshold = true;
    end
    if f == 0
        finished = true;
    else
        fitness(z) = 1 / f;
        d(z) = f;
    end
end
end
=== Update 2 ===
Found a solution here: http://www.mathworks.com/matlabcentral/answers/112771-how-to-optimize-the-following-function
Using histc(V, 1:9), it's much faster :)
This is rather impossible to say without seeing your code, knowing whether you use parallelization, etc. Indeed, as MrAzzaman says, profiling is the first thing to do. If you find a single bottleneck, especially if it is loop-heavy, it might be sufficient to write that part in C and connect it to Matlab via MEX.
In genetic algorithms, I'd believe that a 10x speed increase could well be obtained. I do not quite agree with MrAzzaman here - in some cases (for loops, working with dynamic objects) Matlab is much, much slower than C/C++/Java. That is not to say that Matlab is always slow, for it is not, but there are plenty of algorithms where it would be slow.
That is, I'd say that if you don't spend much time looping over things, don't use objects, and are not limited by Matlab's data structures, you might be fine with Matlab. That said, if I were to write GAs in Java or Matlab, I'd rather pick the former (and I'm using Matlab a lot more than Java these days, so it's not just a matter of habit).
Btw. if you don't want to program it yourself, have a look at JGAP, it's a rather useful Java library for GAs.
OK, the first step is just to write a faster MATLAB function. Save the new languages for later.
I'm going to make the assumption that the board is full of valid guesses: that is, each entry is in [1, 9]. Now, what we're really looking for are duplicate entries in each row/column. To find duplicates, we sort. On a sorted row, if any element is equal to its neighbor, we have a duplicate. In MATLAB, the diff function does sliding pairwise differencing, and a zero in its output means that two neighboring values are equal. Both sort and diff operate on entire matrices, so no need for looping. Here's the code for the columnwise check:
l=sum(sum(diff(sort(board)) == 0));
The rowwise check is exactly the same, just using the transpose. Now let's put that in a test harness to compare results and timing with the previous version:
n = 9;
% Generate a test board: random integers from 1:n
board = randi(n, n);
s = 1:n;
K = 1000; % number of iterations to use for timing

% Repeat current code for comparison
tic
for k = 1:K
    t = 0;
    l = 0;
    for i = 1:n
        l = l + n - length(intersect(s, board(:, i)'));
        t = t + n - length(intersect(s, board(i, :)));
    end
end
toc

% New code based on sort/diff for finding repeated values
tic
for k = 1:K
    l2 = sum(sum(diff(sort(board)) == 0));
    t2 = sum(sum(diff(sort(board.')) == 0));
end
toc

% Check that reported values match
disp([l l2])
disp([t t2])
I encourage you to break down the sort/diff/sum code, and build it up on a sample board right at the command line, and try to understand exactly how it works.
On my system, the new code is about 330x faster.
For traditional GA applications for study and research purposes, it is better to use a language that compiles to native machine code, like C or C++. That is what I used when working with genetic programming in the past, and it is really fast.
However, if you are planning to put this inside a more modern type of application that can be deployed in a web container, run on a mobile device, on different OSes, etc., then Java is your best alternative, as it is platform independent.
Another thing that can be important is concurrency. For example, suppose you want to put your GA on the Internet and you have a growing number of users connected concurrently, all of them wanting to solve a different sudoku. Java applications are very good at scaling horizontally and work well with large numbers of concurrent connections.
Another thing that is good about migrating to Java is the number of libraries and frameworks you can use; the Java universe is so big that you can find useful tools for almost any kind of application.
Java compiles to bytecode that runs on a virtual machine, but it is important to note that current JVMs perform very well and are able to optimize programs; for example, they will find which methods are used most heavily and compile them to native code, which means that for some applications a Java program will be almost as fast as a native one compiled from C.
Matlab is a platform that is very useful for engineering training and for math, vector, and matrix based calculations, as well as some control work with Simulink. I used these products during my electrical engineering bachelor's, but their goal is mainly to be a tool for academic purposes, and I would definitely not go for Matlab if I wanted to build a production application for the real world. It does not scale, it is expensive to maintain and fine-tune, and there are not a lot of infrastructure providers that will support this kind of technology.
About the complexity of rewriting your code in Java: Matlab and Java syntax are pretty similar, and they live in the same procedural/OO paradigm, so even if you are not using OO in your code it can be rewritten in Java fairly easily. The painful parts will be Matlab's shortcuts for math structures like matrices, and passing functions as parameters.
For the matrix stuff, there are a lot of Java libraries, like EJML, that will make your life easier. As for assigning functions to variables and then passing them as parameters to other functions, Java cannot currently do that (Java 8 will, with lambda expressions), but you can get equivalent functionality by using class closures (anonymous classes). Maybe these will be the only slightly painful things you find when migrating.
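For what it's worth, here is a minimal hypothetical sketch of that "class closure" pattern applied to a fitness function; all names (FitnessFunction, evaluateAll, GaSketch) are invented for illustration, and pre-Java-8 code would pass an anonymous implementation of a small interface instead of a lambda:

// Illustrative sketch only; the interface and method names are made up.
interface FitnessFunction {
    double evaluate(int[][] board);
}

public class GaSketch {
    // Apply a caller-supplied fitness function to every board in the population.
    static double[] evaluateAll(int[][][] population, FitnessFunction f) {
        double[] scores = new double[population.length];
        for (int i = 0; i < population.length; i++) {
            scores[i] = f.evaluate(population[i]);
        }
        return scores;
    }

    public static void main(String[] args) {
        int[][][] population = new int[10][9][9]; // dummy 9x9 boards
        double[] scores = evaluateAll(population, new FitnessFunction() {
            @Override
            public double evaluate(int[][] board) {
                return 0.0; // real duplicate-counting logic would go here
            }
        });
        System.out.println(scores.length + " boards evaluated");
    }
}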

Mergesort running faster on larger inputs

I'm working on an empirical analysis of merge sort (sorting strings) for school, and I've run into a strange phenomenon that I can't explain or find an explanation of. When I run my code, I capture the running time using the built-in System.nanoTime() method, and for some reason, at a certain input size, it actually takes less time to execute the sort routine than it does with a smaller input size.
My algorithm is just a basic merge sort, and my test code is simple too:
//Get current system time
long start = System.nanoTime();
//Perform mergesort procedure
a = q.sort(a);
//Calculate total elapsed sort time
long time = System.nanoTime()-start;
The output I got for elapsed time when sorting 900 strings was: 3928492ns
For 1300 strings it was: 3541923ns
Both of those are averages over about 20 trials, so it's pretty consistent. After 1300 strings, the execution time continues to grow as expected. I'm thinking there might be some peak input size where this phenomenon is most noticeable.
So my question: what might be causing this sudden increase in speed of the program? I was thinking there might be some sort of optimization going on with arrays holding larger amounts of data, although 1300 items in an array is hardly large.
Some info:
Compiler: Java version 1.7.0_07
Algorithm: Basic recursive merge sort (using arrays)
Input type: Strings 6-10 characters long, shuffled (random order)
Am I missing anything?
Am I missing anything?
You're trying to do a microbenchmark, but the code you've posted so far does not resemble a well-constructed benchmark. To get one, please follow the rules stated here: How do I write a correct micro-benchmark in Java?.
The reason your code appears faster on larger data is that after some iterations of your method, the JIT compiler kicks in and optimizes your code, so later runs are faster even when processing more data.
Some recommendations:
Use several array/list inputs of different sizes. Good values for this kind of analysis are 100, 1000 (1k), 10000 (10k), 100000 (100k), 1000000 (1m), and random sizes in between. You will get more accurate results from evaluations that take longer.
Use arrays/lists of different objects. Create a POJO that implements the Comparable interface, then execute your sort method on it. As explained above, use arrays of different sizes.
Not directly related to your question, but the execution results depend on the JDK used. Eclipse is just an IDE and can work with different JDK versions; e.g. at my workplace I use JDK 6u30 for company projects, but for personal projects (like proofs of concept) I use JDK 7u40.
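To illustrate the JIT warm-up point, a minimal sketch could look like the following (Arrays.sort stands in for your own q.sort, and the random strings are made up): run the sort many times first, then time a run on the already-optimized code.

import java.util.Arrays;
import java.util.UUID;

public class WarmupTiming {
    public static void main(String[] args) {
        // Build a shuffled array of random strings (stand-in for the real input).
        String[] data = new String[1300];
        for (int i = 0; i < data.length; i++) {
            data[i] = UUID.randomUUID().toString().substring(0, 8);
        }

        // Warm-up: give the JIT a chance to compile the hot code before measuring.
        for (int i = 0; i < 10_000; i++) {
            Arrays.sort(Arrays.copyOf(data, data.length));
        }

        // Measured run on warmed-up code.
        long start = System.nanoTime();
        Arrays.sort(Arrays.copyOf(data, data.length));
        long elapsed = System.nanoTime() - start;
        System.out.println("Elapsed: " + elapsed + " ns");
    }
}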

java frameworks complexity statistics

It is extremely difficult to illustrate the complexity of frameworks (hibernate, spring, apache-commons, ...)
The only thing I could think of was to compare the file sizes of the jar libraries or even better, the number of classes contained in the jar files.
Of course this is not a mathematical sound proof of complexity. But at least it should make clear that some frameworks are lightweight compared to others.
Of course it would take quite some time to calculate such statistics. In an attempt to save time, I was wondering whether somebody has perhaps done so already?
EDIT:
Yes, there are a lot of tools to calculate the complexity of individual methods and classes. But this question is about third party jar files.
Also please note that 40% of the phrases in my original question stress the fact that everybody is well aware that complexity is hard to measure and that file size and number of classes may indeed not be sufficient. So it is not necessary to elaborate on this any further.
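In case it helps anyone reproduce such numbers, a quick sketch of the "count classes per jar" idea could look something like this (just an illustration, not a rigorous complexity measure):

import java.util.Enumeration;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

public class JarClassCounter {
    public static void main(String[] args) throws Exception {
        // Pass the path of any framework jar you want to measure as the first argument.
        try (JarFile jar = new JarFile(args[0])) {
            int classes = 0;
            Enumeration<JarEntry> entries = jar.entries();
            while (entries.hasMoreElements()) {
                if (entries.nextElement().getName().endsWith(".class")) {
                    classes++;
                }
            }
            System.out.println(args[0] + ": " + classes + " classes");
        }
    }
}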
There are tools out there that can measure the complexity of code. However, this is more of a psychological question, as you cannot mathematically define the term 'complex code'. And obviously, asking two random people about some piece of code will give you very different answers.
In general the issue with complexity arises from the fact that a human brain cannot process more than a certain number of lines of code simultaneously (functional pieces, really, but normal lines of code should be exactly that). The exact number of lines that one can hold and understand in memory at the same time of course varies based on many factors (including time of day, day of the week and the status of your coffee machine) and therefore completely depends on the audience. However, the fewer lines of code you have to keep in your 'internal memory register' for one task, the better, so this should be the general factor when trying to determine the complexity of an API.
There is however a pitfall with this way of calculating complexity, as many APIs offer you a fast way of solving a problem (an easy entry level), but this solution later turns out to cause several very complex coding decisions that overall make your code very difficult to understand. In contrast, other APIs require you to do a very complex setup that is hard to understand at first, but the rest of your code will be extremely easy because of that initial setup.
Therefore a good way of measuring API complexity is to define a task that is representative and big enough to be solved with that API, and then measure the average number of simultaneous lines of code one has to keep in mind to implement that task. And once you're done, please publish the result in a scientific paper of your choice. ;)
