I'm learning about time complexity and I understand the gist of it, but one thing is really confusing me: working out the time complexity of a while loop nested inside a for loop.
This is the code I am analyzing:
sum := 0
for i := 1 to n
    j := 1
    while j ≤ i
        sum += j
        j *= 5
    end
end
I've given it a shot and made this table breaking it down:
CODE               COST             # OF TIMES    TIME COMPLEXITY
sum := 0           1                1             1
for i := 1 to n    int i = 1 (1)    1             2n+1
                   i <= n (1)       n+1
                   i++ (1)          n
j := 1             1                n             n
while j ≤ i        j ≤ i (1)        ?             ?
sum += j           1                ?             ?
j *= 5             1                ?             ?
end                0                1             0
end                0                1             0
I understand how the time complexity works for the for loop, but when I get to the while loop I'm confused.
I know that assignments have cost of 1 and comparisons have a cost of 1.
If the while loop were written as:
sum := 0
j := 1
while j <= n
    sum += j
    j *= 5
end
then, since j is multiplied by 5 each time (j *= 5), its time complexity would be log base 5 of n.
But the while loop in the code goes j<=i, which is throwing me off.
If someone could help me with the cost/# of times for the individual lines of the while loop, I would really appreciate it.
fyi: this isn't an assignment for school or anything like that, I'm genuinely trying to understand this concept for myself.
Your loop is equivalent to the sum of log_5(i) for all i from 1 to n. The logarithm base doesn't matter; it's a constant, so I'll just say log from now on. This summation is asymptotically equal to n log n. (More precisely, the sum is log(n!), and Stirling's approximation shows that log(n!) is Theta(n log n).) Think about it this way: the graph of log is really, really flat, so most of the higher values in your summation look roughly the same to the log function. This is only an intuition hint, not a formal proof, but it's sufficient. If you need a formal proof, check out this post.
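If it helps to make that concrete, here's a quick empirical check (my own sketch, not from the post above): it counts how many times the inner while-loop body runs for a given n and compares the tally to n * log5(n).

public class LoopCount {
    public static void main(String[] args) {
        int n = 1_000_000;
        long count = 0;
        for (int i = 1; i <= n; i++) {
            long j = 1;
            while (j <= i) {     // same shape as the loop in the question
                count++;         // stands in for "sum += j"
                j *= 5;
            }
        }
        double estimate = n * (Math.log(n) / Math.log(5));   // n * log5(n)
        System.out.println("actual: " + count + "  n*log5(n): " + estimate);
    }
}

The ratio of the two approaches 1 as n grows, which is the Theta(n log n) claim in empirical form.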
Related
I'm given code for an algorithm as such:
1 sum = 0;
2 for (i = 0; i < n; i++)
3     for (j = 1; j <= n; j *= 3)
4         sum++;
I'm told this algorithm runs in O(nlogn) where log is in base 3.
So I get that the second line runs n times, and since line 3 is independent of line 2, I would have to multiply the two to get the Big O. However, I'm not sure how the answer is n log n (log in base 3). Is there a guaranteed way to figure this out every time? It seems like with nested loops, a different case can occur each time.
What you have been told is correct. The algorithm runs in O(n log(n)). The third line, for(j=1; j<=n; j*=3), runs in O(log3(n)) because j is multiplied by 3 each time. To see it more clearly, solve this problem: how many times do you need to multiply 1 by 3 to get n? That is, 3^x = n, and the solution is x = log3(n).
Yes, the algorithm runs in n log(n) time, where log is base 3.
The easiest way to calculate complexity is to count the number of operations performed.
The outer for loop runs n times. Now let's work out how many times the inner loop runs as a function of n: for n = 1, 2 the inner loop runs once; for n = 3, 4, 5, 6, 7, 8 it runs twice; and so on.
That means the inner loop does logarithmic work (log(n)) on every pass of the outer loop.
So n * log(n) is the total complexity.
In the second loop you have j *= 3, which means you can divide n by 3 log3(n) times. That gives you the O(log n) complexity.
Since your first loop is O(n), you have O(n log n) in the end.
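To see the base-3 logarithm fall out concretely, here's a small sketch (my own, with illustrative values of n): it counts how many iterations the j *= 3 loop performs and prints log3(n) next to it.

public class LogBase3 {
    public static void main(String[] args) {
        for (int n : new int[]{1, 3, 9, 27, 1000, 1000000}) {
            int steps = 0;
            for (long j = 1; j <= n; j *= 3) {
                steps++;                              // one inner-loop iteration
            }
            double log3n = Math.log(n) / Math.log(3); // log base 3 of n
            System.out.println("n=" + n + "  steps=" + steps + "  log3(n)=" + log3n);
        }
    }
}

steps comes out to floor(log3(n)) + 1, which is Theta(log n); the outer loop multiplies that by n, giving the O(n log n) total.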
I'm given a pseudocode statement as such:
function testFunc(B)
for j=1 to B.length-1
for i=1 to B.length-j
if(B[i-1] > B[i])
swap B[i-1] and B[i]
And I'm told to show that this algorithm runs in O(n^2) time.
So I know that the first for loop runs n times, because I believe it's inclusive. I'm not sure about the rest of the lines, though - would the second for loop run n-2 times? Any help would be much appreciated.
The inner loop runs a decreasing number of times. Look at a concrete example. If B.length were 10, the outer loop would run 9 times, and the contents of the inner loop would be executed 9 times, then 8 times, and so on, down to 1 time.
Using Gauss's formula
n(n + 1) / 2
you can see that the inner code would be executed 45 times in that example. (9(9 + 1)/2 = 45)
So, it follows that for an array of length n, it runs (n - 1)n / 2 times. This is equivalent to:
1/2 n^2 - 1/2 n
In terms of Big-O, the coefficients and the lower-order terms are ignored, so this is equivalent to O(n^2).
If N = B.length, then outer loop runs N-1 times, and inner loop runs (N-1)+...+3+2+1 times, for a total of (N-1) * (N/2) = N^2/2 - N/2 times, which means O(n^2).
Let's say that B.length is 5. The outer loop will thus run 4 times. On the first iteration through the outer loop, the inner loop will run 4 times; on the second iteration the inner loop will run 3 times; 2 times for the third iteration; and 1 time for the fourth.
Let's lay the results out geometrically:
AAAA
AAA
AA
A
Each A represents getting to the conditional/swap inside the nested loop, and you want to know, in general, how many A's there are.
An easy way to count them is to double the triangle shape to produce a rectangle:
AAAAB
AAABB
AABBB
ABBBB
and you can quickly see that for a triangle whose side is length N - 1 (where N is B.length), there are N*(N-1)/2 A's, because the A's make up half of the (N-1)*N rectangle. Carrying out the multiplication, and ignoring the factor of 1/2 (because big-O doesn't care about constants), we see that there are O(N^2) A's.
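If you want to check the triangle argument by brute force, this sketch (mine, not the poster's) counts how many times the conditional is reached for N = 10 and compares it with the formula:

public class TriangleCount {
    public static void main(String[] args) {
        int N = 10;                          // stands in for B.length
        int count = 0;
        for (int j = 1; j <= N - 1; j++)     // outer loop: j = 1 to N-1
            for (int i = 1; i <= N - j; i++) // inner loop: i = 1 to N-j
                count++;                     // one "A" per visit to the conditional
        System.out.println(count + " vs " + (N * (N - 1) / 2));   // prints 45 vs 45
    }
}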
So this is a 2-part question.
I have some code whose time complexity I'm asked for; it consists of 3 nested for loops:
public void use_space(int n) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            for (int k = 0; k < N; k++)
                ;   // loop body elided in the question
    // and at the end of the code, it makes a recursive call to the function
    use_space(n/2);
    use_space(n/2);
}
So the time complexity recurrence I derived was T(n) = 2T(n/2) + n^3. I got that because there are 2 recursive calls to the function, each on an input of size n/2, and the nested for loops take n^3 time (3 loops).
Is this correct?
And then for the space complexity, I got S(n) = S(n/2) + n
Hope someone can clarify and tell me if this is wrong, and explain why. All help would be greatly appreciated.
Your time complexity is correct. You can use the master theorem to simplify it to Theta(n^3).
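Spelled out (my own working, using case 3 of the master theorem):

\[ T(n) = 2\,T(n/2) + n^3, \qquad a = 2,\ b = 2,\ n^{\log_b a} = n^{\log_2 2} = n, \]
\[ f(n) = n^3 = \Omega\big(n^{\log_b a + \varepsilon}\big) \text{ for } \varepsilon = 2, \quad a\,f(n/b) = 2\,(n/2)^3 = \tfrac{1}{4}\,n^3 \le c\,n^3 \text{ for } c = \tfrac{1}{4} < 1, \]

so case 3 applies and \( T(n) = \Theta(n^3) \).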
But your space complexity seems to be (a little) incorrect.
For every call of use_space(n) you have to save three numbers: i, j and k. These numbers are at most of the size of n (if n == N; I think you mixed them up), so each needs log n bits to be saved. Because the space is freed once a call of use_space finishes, you only ever have to account for one additional call use_space(n/2) at a time.
So you get the space complexity S(n) = S(n/2) + 3 log n.
Improvement: you could free i, j and k after the three loops end, so you do not need 3 log n AND S(n/2) at the same time, but first 3 log n and after that S(n/2), which will in turn be first 3 log(n/2) and then S(n/4), and so on. So you only need the maximum space in use at any one time, which is 3 log n.
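To see what that improvement buys, you can unroll the recurrence both ways (again my own working). Without freeing i, j and k before recursing, the frames stack up:

\[ S(n) = 3\log n + 3\log\frac{n}{2} + 3\log\frac{n}{4} + \dots + 3 = 3\sum_{k=0}^{\log n}\big(\log n - k\big) = \Theta(\log^2 n), \]

whereas with the improvement only one frame's loop variables are live at a time, so the peak is

\[ \max_k\, 3\log\frac{n}{2^k} = 3\log n = \Theta(\log n). \]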
I have a function in Java, written as a recursive algorithm, that needs to be rewritten in iterative form. The thing is that I don't know where to start in wrapping my mind around a new algorithm for this. This is an assignment I am working on.
Consider the following computational problem:
Suppose you work as a consultant and you have a sequence of n potential consulting jobs that pay A[0],A[1],..,A[n-1] dollars, respectively (so job 0 pays A[0] dollars, job 1 pays A[1] dollars, etc.).
Also, job i starts on day i (i = 0, ..., n-1).
However, each job requires 2 days, so you cannot perform any two consecutive jobs. The goal is to determine the maximum amount of money, denoted by F(n), that you can earn from a valid job schedule selected from the n jobs A[0] through A[n-1].
As an example, consider the following input array:

index: 0  1  2  3  4  5
A:     5  6  8  6  2  4

An optimal schedule is to do jobs 0, 2, and 5, for which the amount of money earned, F(6) = A[0] + A[2] + A[5] = 17, is as large as possible. Notice that this is a valid schedule, as no two consecutive jobs are included.
My function for the recursive version that solves this is as follows:
public static int jobScheduleRecursive(int[] A, int n)
{
    if (n == 0) { return 0; }
    else if (n == 1) { return A[0]; }
    else {
        // either skip job n-1, or take it, which rules out job n-2
        return Math.max(jobScheduleRecursive(A, n - 1),
                        A[n - 1] + jobScheduleRecursive(A, n - 2));
    }
}
To sum up, I have to come up with an iterative algorithm that does this job. The only problem is that I have no idea how to proceed. I would appreciate any advice to lead me in the right direction.
Sometimes an iterative solution for a problem with a known recursive solution isn't as straightforward as the recursion itself. The easiest way to get an iterative solution is dynamic programming: basically, you want a temporary array that holds the solutions to all the sub-problems along the way. To achieve that, create a dynamically allocated array the size of your input. If, for instance, your recursive function is int foo(int a), fill the array with the solutions foo(1)..foo(n), where n is the input to your original problem.
Then change the algorithm so that, instead of calling itself recursively, it checks whether the solution to a sub-problem already exists in the array, and fills it in if not. That way a sub-problem isn't computed numerous times, but only once.
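Applied to the job scheduler above, that recipe gives a bottom-up version along these lines (a sketch; jobScheduleIterative is my name for it, and it mirrors the recursive cases directly):

public static int jobScheduleIterative(int[] A) {
    int n = A.length;
    if (n == 0) { return 0; }
    int[] F = new int[n + 1];   // F[k] = best earnings from the first k jobs
    F[0] = 0;                   // no jobs, no money (base case n == 0)
    F[1] = A[0];                // one job: take it (base case n == 1)
    for (int k = 2; k <= n; k++) {
        // either skip job k-1, or take it, which rules out job k-2
        F[k] = Math.max(F[k - 1], A[k - 1] + F[k - 2]);
    }
    return F[n];
}

For the example array {5, 6, 8, 6, 2, 4} this fills F with {0, 5, 6, 13, 13, 15, 17} and returns 17, matching the optimal schedule from the problem statement. Each sub-problem is computed exactly once, so the whole thing runs in O(n).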
I have an algorithm, and I want to figure out what it does. I'm sure some of you can just look at this and tell me, but I've been looking at it for half an hour and I'm still not sure. It just gets messy when I try to play with it. What are your techniques for breaking down an algorithm like this? How do I analyze stuff like this and know what's going on?
My guess is it's sorting the numbers from smallest to biggest, but I'm not too sure.
1.  mystery(a1, a2, ..., an : array of real numbers)
2.  k = 1
3.  bk = a1
4.  for i = 2 to n
5.      c = 0
6.      for j = 1 to i − 1
7.          c = aj + c
8.      if (ai ≥ c)
9.          k = k + 1
10.         bk = ai
11. return b1, b2, ..., bk
Here's an equivalent I tried to write in Java, but I'm not sure if I translated it properly:
public int[] foo(int[] a) {
int k=1;
int nSize=10;
int[] b=new int[nSize];
b[k]=a[1];
for (int i=2;i<a.length;){
int c=0;
for (int j=1;j<i-1;)
c=a[j]+c;
if (a[i]>=c){
k=k+1;
b[k]=a[i];
Google never ceases to amaze, due on the 29th I take it? ;)
A Java translation is a good idea, once operational you'll be able to step through it to see exactly what the algorithm does if you're having problems visualizing it.
A few pointers: the pseudo-code has arrays indexed 1 through n; Java's arrays are indexed 0 through length - 1. Your loops need to be modified to suit this. Also, you've left the increments off your loops: i++, j++.
Making b a magic constant size isn't a good idea either - looking at the pseudo-code we can see it's written to at most n times, so that would be a good starting point for its size. You can resize it to fit at the end.
Final tip: the algorithm is O(n^2). This is easy to determine - the outer for loop runs n times, and the inner for loop runs n/2 times on average, for a total running time of n * (n/2). The n * n factor dominates, which is what Big-O is concerned with, making this an O(n^2) algorithm.
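Putting those pointers together, a corrected translation could look like the following (my sketch; sizing b to a.length and trimming with Arrays.copyOf at the end are choices I've assumed, not part of the original):

import java.util.Arrays;

public static int[] mystery(int[] a) {
    int[] b = new int[a.length];       // written to at most a.length times
    int k = 0;                         // 0-based: k points at the last slot used
    b[k] = a[0];                       // bk = a1; assumes a is non-empty, like the pseudo-code
    for (int i = 1; i < a.length; i++) {
        int c = 0;
        for (int j = 0; j < i; j++)    // c = sum of all elements before a[i]
            c = a[j] + c;
        if (a[i] >= c) {
            k = k + 1;
            b[k] = a[i];               // keep a[i]: it's >= the sum so far
        }
    }
    return Arrays.copyOf(b, k + 1);    // trim to the k+1 entries actually used
}

Running it on {3, 6, 1, 19, 2} returns {3, 6, 19}, which agrees with the hand trace in the other answer.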
The easiest way is to take a small sample set of numbers and work through it on paper. In your case, let's take a[] = {3, 6, 1, 19, 2}. Now we need to step through your algorithm:
Initialization:
i = 2
b[1] = 3
After Iteration 1:
i = 3
b[2] = 6
After Iteration 2:
i = 4
b[2] = 6
After Iteration 3:
i = 5
b[3] = 19
After Iteration 4:
i = 6
b[3] = 19
Result b[] = {3,6,19}
You probably can guess what it is doing.
Your code is pretty close to the pseudo-code, but there are a few errors:
Your for loops are missing the increment rules: i++, j++
Java arrays are 0-based, not 1-based, so you should initialize k=0, a[0], i=1, etc.
Also, this isn't sorting, more of a filtering - you get some of the elements, but in the same order.
How do I analyze stuff like this and know what's going on?
The basic technique for something like this is to hand execute it with a pencil and paper.
A more advanced technique is to decompose the code into parts, figure out what the parts do and then mentally reassemble it. (The trick is picking the boundaries for decomposing. That takes practice.)
Once you get better at it, you will start to be able to "read" the pseudo-code ... though this example is probably a bit too gnarly for most coders to handle like that.
When converting to java, be careful with array indexes, as this pseudocode seems to imply a 1-based index.
From static analysis:
a is the input and doesn't change
b is the output
k appears to be a pointer to an element of b, that will only increment in certain circumstances (so we can think of k = k+1 as meaning "the next time we write to b, we're going to write to the next element")
what does the loop in lines 6-7 accomplish? So what's the value of c?
using the previous answer then, when is line 8 true?
since lines 9-10 are only executed when line 8 is true, what does that say about the elements in the output?
Then you can start to sanity check your answer:
will all the elements of the output be set?
try running through the pseudocode with an input like [1,2,3,4] - what would you expect the output to be?
def funct(*a)
  sum = 0
  a.select { |el| (el >= sum).tap { sum += el } }
end
Srsly? Who invents those homework questions?
By the way: since this is doing both a scan and a filter at the same time, and the filter depends on the scan, which language has the necessary features to express this succinctly in such a way that the sequence is only traversed once?
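For comparison, the same fused scan-and-filter is straightforward in a plain Java loop (a sketch; the class and method names are mine): keeping a running sum means the sequence is traversed only once, unlike the pseudocode above, which recomputes c from scratch on every pass.

import java.util.ArrayList;
import java.util.List;

public class ScanFilter {
    static List<Integer> funct(int[] a) {
        List<Integer> out = new ArrayList<>();
        int sum = 0;
        for (int el : a) {
            if (el >= sum) { out.add(el); }  // filter: keep el if it is >= the sum of earlier elements
            sum += el;                       // scan: running total, updated either way
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(funct(new int[]{3, 6, 1, 19, 2}));   // prints [3, 6, 19]
    }
}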