How to calculate time and space complexity of algorithms [closed] - java

It's difficult to tell what is being asked here. This question is ambiguous, vague, incomplete, overly broad, or rhetorical and cannot be reasonably answered in its current form. For help clarifying this question so that it can be reopened, visit the help center.
Closed 12 years ago.
How do you calculate the space and time complexity of algorithms in Java?
Is the total execution time [using System.nanoTime()] equal to the time complexity of an algorithm or function?
Example: space and time complexity estimate of the nth number in the Fibonacci series.

Time complexity is a theoretical indication of scalability on an idealised machine (it's about the algorithm, not the machine).
System.nanoTime() will tell you how long something took on a specific machine, in a specific state, for specific data inputs.
Time complexity is better for working out worst-case behaviour; measuring is more useful if you have a specific use case you want to consider.

Is the total execution time [using System.nanoTime()] equal to the time complexity of an algorithm or function?
No. When calculating the complexity order of a program, it is usually done in the Big-O notation. Here is everything you need to know about it. With examples.

First, define the basic operation of the algorithm. Add a counter that records how many times the basic operation runs before the algorithm finishes, and express that count as a function of the input size n.
In the Fibonacci series, the basic operation is addition (adding the last two elements gives the next one).
To calculate the nth number, n-1 additions must be performed, so the complexity of computing the nth Fibonacci number this way is O(n).
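As a sketch of that counting approach, here is an iterative Fibonacci in Java with a counter on the basic operation (the class and field names are mine, not from the question):

```java
// Sketch: count additions while computing the nth Fibonacci number iteratively.
// The counter, not wall-clock time, is what the O(n) claim is about.
public class FibCount {
    static long additions = 0; // counts the basic operation

    static long fib(int n) {
        if (n < 2) return n;
        long prev = 0, curr = 1;
        for (int i = 2; i <= n; i++) {
            long next = prev + curr; // basic operation: one addition
            additions++;
            prev = curr;
            curr = next;
        }
        return curr;
    }

    public static void main(String[] args) {
        System.out.println(fib(10));   // 55
        System.out.println(additions); // 9, i.e. n - 1 additions
    }
}
```

Running it for several values of n shows the counter growing linearly with n, which is exactly what O(n) predicts, while System.nanoTime() for the same calls would fluctuate from run to run.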

Related

Finding big O time complexity function using only run time data [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 4 years ago.
In a project for my algorithms class we have to run 5 sorting methods of unknown types and gather running-time data for each of them, using the doubling method for the problem size. We then have to use the ratio of the running times to work out the time complexity functions. The sorting methods used are selection sort, insertion sort, merge sort, and quicksort (randomized and non-randomized). We have to use empirical analysis to determine which sorting method is used in each of the five unknown methods in the program. My question is: how does one go from the ratio to the function? I know that N = 2^k, so we can use log2(ratio) = k, but I am not sure how that correlates with the time complexity of, say, mergesort, which is O(N log N).
The Big-O notation more or less describes a function, where the input N is the size of the collection, and the output is how much time will be taken. I would suggest benchmarking your algorithms by running a variety of sample input sizes, and then collecting the running times. For example, for selection sort you might collect this data:
N | running time (ms)
1000 | 0.1
10000 | 10
100000 | 1000
1000000 | 100000
If you plot this, using a tool like R or Matlab, or maybe Excel if you are feeling lazy, you will see that the running time varies with the square of the sample size N. That is, multiplying the sample size by 10 results in a 100-fold increase in running time. This is O(N^2) behavior.
For the other algorithms, you may collect similar benchmark data, and also create plots.
Note that you have to keep in mind things like JVM startup and warm-up time before your actual code runs at full speed. The way to deal with this is to take many data points. Overall, linear, logarithmic, etc. behavior should still be discernible.
On a log-log graph (log of size vs log of running time) you will find that O(n^k) is a line of slope k. That will let you tell O(n) from O(n^2) very easily.
To tell O(n) from O(n log(n)) just graph f(n)/n vs log(n). A function that is O(n) will look like a horizontal line, and a function that is O(n log(n)) will look like a line with slope 1.
Don't forget to throw both ordered and unordered data at your methods.
You can also just look at how the measured time grows:
If linear (O(n)), doubling the input size doubles the time: t -> 2t.
If quasi-linear (O(n log n)), doubling the input size slightly more than doubles the time: t -> 2t * log(2n)/log(n), which approaches 2t as n grows.
If quadratic (O(n^2)), doubling the input size quadruples the time: t -> 4t.
Note that these ratios are theoretical; expect measured values to fluctuate around them.
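A minimal sketch of the doubling method in Java, using Arrays.sort as a stand-in for the unknown sorting methods (the class and method names are mine):

```java
import java.util.Arrays;
import java.util.Random;

// Sketch: the "doubling method" -- time a sort at size n and at 2n, then report the ratio.
// t(2n)/t(n) near 2 suggests O(n), near 4 suggests O(n^2), slightly above 2 suggests O(n log n).
public class DoublingBench {
    static long timeSort(int n, Random rnd) {
        int[] a = rnd.ints(n).toArray();       // fresh random input for each trial
        long start = System.nanoTime();
        Arrays.sort(a);                        // stand-in for the algorithm under test
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        Random rnd = new Random(42);
        timeSort(1 << 16, rnd);                // warm-up so JIT compilation doesn't skew the first timing
        for (int n = 1 << 16; n <= 1 << 19; n <<= 1) {
            long t1 = timeSort(n, rnd);
            long t2 = timeSort(2 * n, rnd);
            System.out.printf("n=%d  ratio t(2n)/t(n) = %.2f%n", n, (double) t2 / t1);
        }
    }
}
```

Averaging several trials per size, as the answers above recommend, smooths out the noise before you compute the ratios.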

Java: What are the best, average, and worst case complexities of this algorithm? [closed]

Closed 10 years ago.
I want to know the complexity of this algorithm. In my analysis, the best, average, and worst cases are all O(n^2).
public char getModa(char[] a) {
    int[] ii = new int[a.length];
    char[] t = new char[a.length];
    for (int i = 0; i < a.length; i++) {
        for (int j = 0; j < a.length; j++) {
            if (a[j] == a[i]) {
                ii[i]++;
                t[i] = a[j];
            }
        }
    }
    int cc = 0;
    for (int i = 0; i < ii.length; i++) {
        if (ii[i] > ii[cc]) cc = i;
    }
    return a[cc];
}
The complexity in all cases (best/average/worst) is O(n^2); the bottleneck is the double iteration over (i, j) in range([0, a.length), [0, a.length)).
To clarify why it is indeed O(n^2) and not O(n^3) as one might think: the last loop is not nested inside the bottleneck, so the complexity of the three loops is O(n^2 + n), and since O(n^2 + n) = O(n^2), that is the answer.
In fact, there is no variation at all: for arrays of the same length, the algorithm performs the same number of iterations regardless of the exact input.
The code inside the nested for loops runs (a.length)^2 times, and the code inside the last for loop runs ii.length = a.length times. So we have (a.length)^2 + a.length iterations in total, resulting in O(a.length^2).
The amount of iterations (and as such, the complexity) only depends on the size of a, not the values in it. As such, it's the same in both best and worst case scenarios.
Your complexity is O(N^2).
A similar problem may be solved in O(N) expected complexity by using a hash table.
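A minimal sketch of that hash-table idea in Java (the class and method names are my own; like the original, this assumes a non-empty input array):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: the same mode computation in O(n) expected time using a hash table,
// instead of the O(n^2) nested loops in the question.
public class Mode {
    static char getModa(char[] a) {
        Map<Character, Integer> counts = new HashMap<>();
        char best = a[0];
        int bestCount = 0;
        for (char c : a) {
            int count = counts.merge(c, 1, Integer::sum); // increment this char's occurrence count
            if (count > bestCount) {                      // track the most frequent char so far
                bestCount = count;
                best = c;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        System.out.println(getModa(new char[]{'a', 'b', 'b', 'c', 'b'})); // b
    }
}
```

One pass over the array and O(1) expected work per element gives O(n) expected time, at the cost of O(n) extra space for the map.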
The for loops are directly dependent on the length of the array a.
So it is O(n*n) in all cases.
The last for loop does not affect the complexity of the program, since it is already O(n*n).

How do I write a Java program that calculates the factors of a number? [closed]

Closed 10 years ago.
I need help thinking up a formula for finding the factors of a number:
Write a method named printFactors that accepts an integer as its
parameter and uses a fencepost loop to print the factors of that
number, separated by the word " and ". For example, the number 24's
factors should print as:
1 and 2 and 3 and 4 and 6 and 8 and 12 and 24
You may assume that the number parameter's value is greater than 0.
Please don't give me a COMPLETE program as I would like to try it out myself.
The current code I have has a for loop to control the number of "and's" appearing, however, I printed out the last number by itself since I don't want a "24 and" attached to it...
So the output looks something like this at the moment:
"1 and 2 and 3" (I haven't yet thought up the equation hence the 1,2,3...)
I'm currently thinking that the factors requires a % kind of formula right? Will I need division? I was also thinking of printing out 1 and whatever the number (in this case, 24) you are finding factors for, since 1 and the number itself are always factors of itself.
What else am I missing??
Thanks in advance!! :)
I'm currently thinking that the factors requires a % kind of formula right?
Yes.
I was also thinking of printing out 1 and whatever the number (in this case, 24) you are finding factors for, since 1 and the number itself are always factors of itself.
If you test every number from 1 to n (e.g. from 1 to 24) then 1 and the number itself don't need to be special cases (because they'll simply satisfy your ordinary "% kind of formula").
Maybe 1 is a special case because it doesn't have the word "and" in front of it.
What else am I missing??
This may be more complicated than you want, but to find all the factors of n you only need to loop up to the square root of n.
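A sketch of that square-root idea in Java (deliberately not a full printFactors solution, since the asker wants to finish it themselves; the class and method names are mine):

```java
import java.util.TreeSet;
import java.util.stream.Collectors;

// Sketch of the square-root idea: each divisor d <= sqrt(n) pairs with n/d,
// so one loop up to sqrt(n) finds every factor of n.
public class Factors {
    static TreeSet<Integer> factorsOf(int n) {
        TreeSet<Integer> factors = new TreeSet<>();
        for (int d = 1; (long) d * d <= n; d++) {
            if (n % d == 0) {       // the "% kind of formula"
                factors.add(d);
                factors.add(n / d); // the paired factor above sqrt(n)
            }
        }
        return factors;
    }

    public static void main(String[] args) {
        System.out.println(factorsOf(24).stream()
                .map(String::valueOf)
                .collect(Collectors.joining(" and ")));
        // 1 and 2 and 3 and 4 and 6 and 8 and 12 and 24
    }
}
```

The joining here is not a fencepost loop as the assignment requires; it just shows that the sqrt trick produces all the factors in order.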

What happens if a binary search on a non-sorted data set is attempted? [closed]

Closed 10 years ago.
What happens if a binary search on a non-sorted data set is attempted?
The results are unpredictable. If the data set contains the target, it may or may not be found.
EDIT Just for kicks, I ran a little experiment. First, I picked an array size and generated an int array {0, 1, ..., size-1}. Then I shuffled the array, did a binary search for each value 0, 1, ..., size-1 and counted how many of these were found. For each size, I repeated the shuffle/search-for-each-value steps 100,000 times and recorded the percent of searches that succeeded. (This would be 100% for a sorted array.) The results are (rounded to the nearest percent):
Size % Hit
10 34%
20 22%
30 16%
40 13%
50 11%
60 10%
70 9%
80 8%
90 7%
100 6%
So the larger the array, the worse the effects of not sorting. Even for relatively small arrays, the results are pretty drastic.
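A sketch of that experiment in Java, with far fewer repetitions than the 100,000 used above (the class and method names are mine):

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.Random;

// Sketch of the experiment described above: shuffle {0, ..., size-1}, binary-search
// for every value, and count what fraction of searches still succeed.
public class UnsortedSearch {
    static double hitRate(int size, int trials, Random rnd) {
        int hits = 0;
        for (int t = 0; t < trials; t++) {
            Integer[] a = new Integer[size];
            for (int i = 0; i < size; i++) a[i] = i;
            Collections.shuffle(Arrays.asList(a), rnd);       // break the sort order
            for (int key = 0; key < size; key++) {
                if (Arrays.binarySearch(a, key) >= 0) hits++; // found despite no order?
            }
        }
        return 100.0 * hits / ((double) size * trials);
    }

    public static void main(String[] args) {
        Random rnd = new Random(0);
        for (int size = 10; size <= 100; size += 30) {
            System.out.printf("size=%d  hit rate=%.0f%%%n", size, hitRate(size, 1000, rnd));
        }
    }
}
```

On a sorted array the hit rate would be 100%; on shuffled arrays it falls as the size grows, matching the table above.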
You will almost certainly not find the element you've been searching for. If the array is mostly sorted, then you could get lucky.
The algorithm could be implemented in a way to detect this with some probability, but unless it does a full scan of the array, there's no way to guarantee that a binary search will detect this error condition.
Binary search relies on the data being sorted. It picks an element in the array and determines:
1. whether this is the element it is searching for;
2. if it is not, in which half of the array the element could possibly be found.
The second point relies on the data being sorted. Imagine unsorted data: just by comparing the search key with the element we have picked, we cannot tell where the element could occur.
So binary search cannot work consistently on unsorted data.
Binary search is meant to work on a sorted array; if it is run on an unsorted array, the result will be unpredictable and unreliable.

Array handling Algorithms [closed]

Closed 10 years ago.
Given an array of integers, give two integers from the array whose addition gives a number N.
This was recently covered on the ihas1337code blog. See the comments section for solutions.
Essentially the most efficient way to solve this is to put the numbers in a hash_map and then loop through the array a second time checking each element x if element (N - x) exists in the hash_map.
You can optimize a bit from there, but that is the general idea.
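A minimal Java sketch of that idea, assuming any one valid pair is an acceptable answer (the names and the null-on-failure convention are mine):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: single pass storing values seen so far, checking each element x
// for a previously seen complement N - x. O(n) expected time, O(n) space.
public class TwoSum {
    static int[] findPair(int[] a, int n) {
        Map<Integer, Integer> seen = new HashMap<>();
        for (int x : a) {
            if (seen.containsKey(n - x)) {
                return new int[]{n - x, x}; // complement was seen earlier
            }
            seen.put(x, 1);
        }
        return null; // no two elements sum to n
    }

    public static void main(String[] args) {
        int[] pair = findPair(new int[]{4, 9, 2, 7, 5}, 11);
        System.out.println(pair[0] + " + " + pair[1]); // 9 + 2
    }
}
```

A single pass works because by the time we reach the second member of a valid pair, the first member is already in the map.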
Follow these steps:
1. Sort the numbers using merge sort in O(n log n), in descending order (ascending also works, but this text assumes descending).
2. Use two pointer variables, one pointing to the starting element (say p1, the largest) and the other pointing to the last element (say p2, the smallest).
3. Compute temp_sum = *p1 + *p2 and compare it with the required sum. Repeat these steps until p1 > p2:
i. If sum == temp_sum, our job is over.
ii. If sum > temp_sum, decrement p2 so that it points to a larger value, increasing temp_sum.
iii. If sum < temp_sum, increment p1 so that it points to a smaller value, decreasing temp_sum.
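Those steps can be sketched in Java; this version sorts ascending, the mirror image of the descending variant described (the class and method names are mine):

```java
import java.util.Arrays;

// Sketch of the sort-then-two-pointer approach: after sorting ascending,
// advance p1 to grow the sum and retreat p2 to shrink it. O(n log n) overall.
public class TwoPointerSum {
    static int[] findPair(int[] a, int n) {
        int[] sorted = a.clone();
        Arrays.sort(sorted);                 // O(n log n), ascending
        int p1 = 0, p2 = sorted.length - 1;
        while (p1 < p2) {
            int tempSum = sorted[p1] + sorted[p2];
            if (tempSum == n) return new int[]{sorted[p1], sorted[p2]};
            if (tempSum < n) p1++;           // need a bigger sum: move to a larger value
            else p2--;                       // need a smaller sum: move to a smaller value
        }
        return null; // no pair sums to n
    }

    public static void main(String[] args) {
        int[] pair = findPair(new int[]{8, 3, 5, 1, 6}, 11);
        System.out.println(pair[0] + " + " + pair[1]); // 3 + 8
    }
}
```

Compared with the hash-table approach, this needs no extra map but pays the O(n log n) sorting cost up front.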
