I have questions for my assignment. I need to decide what the Big-O characterization is for the following algorithms. I'm guessing the answer for Question 1 is O(n) and Question 2 is O(log n), but I'm confused about how to state the reasons. Are my answers correct? And could you explain why the characterizations are what they are?
Question 1: O(n), because the counter increments by a constant (1).
The first loop is O(n) and the second loop is also O(n),
so the total is O(n) + O(n) = O(n).
Question 2: O(log n), because it's binary search.
The problem halves every time:
if the array has size n at first, the next step sees n/2, then n/4, ..., down to 1.
Setting n/2^i = 1 gives n = 2^i, hence i = log2(n).
Yes, your answers are right. The first one is pretty simple: two separate (non-nested) for loops, so effectively it's O(n).
The second one is trickier. You are dividing the input size by 2 (halving it) at each step, which leads to a time complexity of O(log n).
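Since the assignment's code isn't quoted above, here is a minimal iterative binary search sketch (an illustrative stand-in for Question 2, not the actual assignment code) that makes the halving concrete:

// Illustrative stand-in for Question 2; assumes a sorted int[].
// Each iteration halves the search range, so the loop runs at most
// about log2(n) + 1 times.
static int binarySearch(int[] a, int key) {
    int lo = 0, hi = a.length - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;   // written this way to avoid (lo + hi) overflow
        if (a[mid] == key) return mid;
        if (a[mid] < key) lo = mid + 1; // key must be in the right half
        else hi = mid - 1;              // key must be in the left half
    }
    return -1; // not found
}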
Related
What is the time complexity of the given for loop? The complexity of the test1() function is O(n), and I think the time complexity of the outer loop is O(log n), so what is the total time complexity? I am confused: is it O(n log n)?
for (int i = 1; i < n; i = i * 3) { // i must start above 0: with i = 0, i * 3 stays 0 and the loop never ends
    test1();
}
i = i * 3 is an interesting increment clause; indeed, it means the loop is O(log n). n is not passed to the test1() method, which makes it rather unlikely that it has O(n) performance, but perhaps you oversimplified the snippet. Assuming test1() is O(n), the entire thing is indeed O(n log n).
Imagine i = i * 10; then i grows 1, 10, 100, 1000, 10000, 100000. "Count the digits" is exactly what log n does, and that's a simple way to see that i = i * 3 also has an O(log n) performance characteristic (the base of the logarithm only changes the constant factor).
NB: I answered earlier and misread i = i * 3 as i += 3 for some silly reason. That would make the loop O(n). @AndrewS fortunately pointed out in a comment that I needed to take a closer look.
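To make the digit-counting intuition concrete, here is a small demo of my own (the class name and printout are illustrative, not from the question) that counts iterations of the multiplicative loop:

// Counts iterations of i = i * 3 and compares with log base 3 of n.
public class GrowthDemo {
    public static void main(String[] args) {
        for (int n : new int[] {10, 1000, 1000000}) {
            int count = 0;
            for (int i = 1; i < n; i = i * 3) {
                count++; // each pass multiplies i by 3
            }
            System.out.println("n = " + n + " -> " + count
                    + " iterations (log3 n ~ " + Math.log(n) / Math.log(3) + ")");
        }
    }
}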
In Java, if I sort using Arrays.sort() (n log n) and then use a for loop, O(n), in the code, what will the overall complexity be? Is it n^2 log n or n log n?
Answer: O(n log n).
Non-nested complexities can simply be added, i.e. O(n) + O(n log n).
For large n, n log n is significantly greater than n; therefore, O(n log n) is the answer.
Read this: https://en.wikipedia.org/wiki/Big_O_notation
Note:
If the complexities are nested, then they are multiplied. For example:
inside a loop of order n, you perform a sort of order n log n.
Then the complexity will be O(n * n log n), i.e. O(n^2 log n).
If you perform the for loop afterwards, you have an O(n log n) operation followed by an O(n) one. Since O(n) is negligible compared to O(n log n), your overall complexity is O(n log n).
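For concreteness, a minimal sketch of the situation being described (the method maxAfterSort and the O(n) scan are my own illustration, not from the question):

import java.util.Arrays;

// An O(n log n) sort followed by a sequential (non-nested) O(n) scan:
// O(n log n) + O(n) = O(n log n) overall.
static int maxAfterSort(int[] a) {
    Arrays.sort(a); // typically O(n log n)
    int max = Integer.MIN_VALUE;
    for (int x : a) { // runs after the sort, not inside it, so the costs add
        max = Math.max(max, x);
    }
    return max;
}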
Suppose I'm using the following code to reverse print a linked list:
public void reverse() {
    reverse(head);
}

private void reverse(Node h) {
    // Base case: h == null also covers an empty list, avoiding a
    // NullPointerException on reverse() when head is null.
    if (h == null) {
        return;
    }
    reverse(h.next);                // recurse all the way to the tail first
    System.out.print(h.data + " "); // then print on the way back up
}
The linked list is printed out in the opposite order, but I don't know how efficient it is. How would I determine the time complexity of this function? Is there a more efficient way to do this?
Calculating the time complexity of recursive algorithms is hard in general. However, there are plenty of resources available; I would start with this Stack Overflow question: Time complexity of a recursive algorithm.
As for the time complexity of this function, it is O(n), because you call reverse n times (once per node). There is no asymptotically more efficient way to reverse, or even print, a list: the problem itself requires you to look at every element at least once, which is by definition an O(n) operation.
Suppose your list has n elements. Each call to reverse(Node) reduces the length of the list by a single element. The efficiency is therefore O(n), which is clearly optimal: you can't reverse a list without considering all the elements.
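Since the question also asks about a more efficient way: asymptotically there is none, but an explicit stack avoids deep recursion (and a possible StackOverflowError) on very long lists. A hedged sketch, assuming the same Node class with int data and Node next fields:

import java.util.ArrayDeque;
import java.util.Deque;

// Still O(n) time; trades the call stack for an explicit one.
// Assumes the same Node class with `int data` and `Node next` fields.
void reversePrintIteratively(Node head) {
    Deque<Integer> stack = new ArrayDeque<>();
    for (Node cur = head; cur != null; cur = cur.next) {
        stack.push(cur.data); // first pass: collect the data, O(n)
    }
    while (!stack.isEmpty()) {
        System.out.print(stack.pop() + " "); // second pass: print in reverse, O(n)
    }
}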
You can use a recursion tree or just expand T(n); both are essentially the same method. What you are doing is expanding the recursive function by noting down what it does each time it is called on its stack.
For example: each time your function is called, it does some constant-time work d (printing the data) and then recurses on a list that is one node shorter.
So, expanding it, you'll get:
T(n) = d + T(n-1)   {one recursion done, so one less to go}
     = d + d + T(n-2)
     = ...
     = n*d + T(0)
The expansion stops once the recursion has walked the full length of the list, hence the complexity is O(n).
Check out this: Time complexity of a recursive algorithm
I have learned the code of all the common sorting algorithms and understood how they work. As part of this, one should also be able to find their time and space complexity. I have seen people just look at the loops and derive the complexity; can someone guide me toward the best practice for doing this? The example code given is for shell sort. What strategy should be used to understand and calculate the complexity from the code itself, something like the step-count method? I need to understand how to do asymptotic analysis starting from the code.
int i, n = a.length, diff = n / 2, interchange, temp;
while (diff > 0) {
    interchange = 0;
    for (i = 0; i < n - diff; i++) {
        if (a[i] > a[i + diff]) {   // elements diff apart are out of order: swap them
            temp = a[i];
            a[i] = a[i + diff];
            a[i + diff] = temp;
            interchange = 1;        // remember that this pass changed something
        }
    }
    if (interchange == 0) {         // a clean pass: shrink the gap
        diff = diff / 2;
    }
}
Since the lower bound on the worst case of any comparison sort is Ω(n log n), evidently one can't do better than that. That bound applies here as well (see the note at the end for a proof sketch).
Worst-case time complexity:
1. Inner loop
Let's first start analyzing the inner loop:
for (i = 0; i < n - diff; i++) {
    if (a[i] > a[i + diff]) {
        temp = a[i];
        a[i] = a[i + diff];
        a[i + diff] = temp;
        interchange = 1;
    }
}
Since we don't know much (anything) about the structure of a at this level, it is definitely possible that the condition holds, and thus that a swap occurs. A conservative analysis therefore says that interchange can be 0 or 1 at the end of the loop. Note, however, that if interchange is 1, the loop will be executed again with the same diff value.
As you noted yourself, the loop body executes O(n-diff) times, and since all instructions inside the loop take constant time, the time complexity of one pass of the loop is O(n-diff) as well.
Now the question is how many times interchange can be 1 before it turns to 0. The worst case is that the item placed at the far right is the minimal element, which will keep "swapping" leftward by diff positions until it reaches the start of the list. So the inner loop itself is repeated at most O(n/diff) times. As a result, the computational effort of the loop for a fixed diff is, worst case:
O((n-diff) * (n/diff)) = O(n^2/diff - n)
2. Outer loop with different diff
The outer loop depends on the value of diff. It starts at n/2; whenever a pass ends with interchange equal to 0 (and we cannot prove this happens before the worst-case number of passes), diff is set to diff/2, and this repeats until diff < 1. This means diff takes all powers of 2 up to n/2:
1 2 4 8 ... n/2
Now we can make an analysis by summing over all the diff values, writing diff = 2^i:

sum_{i = 0}^{log2 n} O(n^2/2^i - n) = O(n^2)
where i represents log2(diff) for a given iteration. If we work this out (the n^2/2^i terms form a geometric series dominated by the i = 0 term), we get an O(n^2) worst-case time complexity.
Note (on the lower bound of the worst case of comparison sorts): one can prove that no comparison sort algorithm exists with a worst-case time complexity better than O(n log n).
This is because for a list with n items, there are n! possible orderings. For each ordering, there is a different way one needs to reorganize the list.
Since a single comparison can, at best, split the set of possible orderings into two equal parts, at least log2(n!) comparisons are required to find out which ordering we are dealing with. log2(n!) = sum of log2(i) for i = 1..n, which can be estimated with a Stirling-style approximation via the integral:
integral from 1 to n of log(x) dx = n log n - n + 1 = O(n log n)
Best-case time complexity: in the best case, the list is already ordered. In that case the inner loop never executes the if-then part, so interchange is never set to 1, and diff is therefore halved after every single pass of the for loop. The outer loop is still repeated O(log n) times, and each pass costs O(n), so the best-case time complexity is O(n log n).
Look at the loops and try to figure out how many times they execute. Start from the innermost ones.
In the given example (not the easiest one to begin with), the innermost for loop is executed for i in the range [0, n-diff), i.e. it is executed exactly n-diff times.
What is done inside that loop doesn't really matter as long as it takes "constant time", i.e. there is a finite number of atomic operations.
Now the outer loop is executed as long as diff>0. This behavior is complex because an iteration can decrease diff or not (it is decreased when no inverted pair was found).
Now you can say that diff will be decreased log(n) times (because it is halved until 0), and between every decrease the inner loop is run "a certain number of times".
An exercised eye will also recognize the interleaved passes of bubble sort and conclude that this number of times will not exceed the number of elements involved, i.e. n-diff, but that's about all that can be said at a glance.
A complete analysis of the algorithm is a horrible mess, as the array gets progressively better and better sorted, which influences the number of inner loop passes.
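As a practical complement to the pencil-and-paper analysis, one can apply the step-count method the question mentions by instrumenting the code with a counter (the helper name here is mine) and watching how the count grows as n doubles:

// The same shell sort, instrumented with a step counter. Running it on random
// arrays of increasing size and comparing counts against n^2 and n log n is a
// quick empirical check of the asymptotic analysis above.
static long countedShellSort(int[] a) {
    long steps = 0;
    int i, n = a.length, diff = n / 2, interchange, temp;
    while (diff > 0) {
        interchange = 0;
        for (i = 0; i < n - diff; i++) {
            steps++; // one comparison per inner iteration
            if (a[i] > a[i + diff]) {
                temp = a[i];
                a[i] = a[i + diff];
                a[i + diff] = temp;
                interchange = 1;
            }
        }
        if (interchange == 0) {
            diff = diff / 2;
        }
    }
    return steps;
}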
It's only executing the for loop about n/3 times, so it's still technically linear, I guess? However, I don't really understand why it wouldn't be O(log n), because code with an O(log n) running time often seems to end up checking around n/3 of the options. Does O(log n) always mean dividing the options by 2 every time?
int a = 0;
for (int i = 0; i < n; i = i+3)
a = a+i;
Your code has complexity O(n): O(n/3) == (1/3) * O(n) == O(n). A constant factor never changes the Big-O class.
With time-complexity analysis, constant factors do not matter. You could do 1,000,000 operations per loop iteration and it would still be O(n). And note that n/3 is nothing like log n: if n is 1,000,000, then n/3 is about 333,333, while log2 n is only about 20.
From the Wikipedia entry on Big-O notation:
Let k be a constant. Then:
O(kg) = O(g) if k is nonzero.
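A small demo of my own (not from the question) that contrasts the additive step i += 3 with a multiplicative step i *= 2:

// Additive steps shrink the remaining distance by a constant: O(n) iterations.
// Multiplicative steps shrink the remaining ratio by a constant: O(log n).
public class StepDemo {
    public static void main(String[] args) {
        for (int n : new int[] {1_000, 1_000_000}) {
            int add = 0, mul = 0;
            for (int i = 0; i < n; i = i + 3) add++; // ~n/3 iterations
            for (int i = 1; i < n; i = i * 2) mul++; // ~log2(n) iterations
            System.out.println("n = " + n + ": i+=3 -> " + add
                    + " iterations, i*=2 -> " + mul + " iterations");
        }
    }
}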
It is order n, i.e. O(n), and not O(log n).
That is because the run time increases linearly with the increase in n.
For an idea of how slowly log n grows by comparison, take a look at this graph and hopefully you will see why it is not log n:
https://www.cs.auckland.ac.nz/software/AlgAnim/fig/log_graph.gif
The running time is O(n) (in the unit-cost measure).