I've written a piece of Java code that, when given an array (arrayX), works out the prefix averages of that array and outputs them in another array (arrayA). I am supposed to count the primitive operations and calculate the Big-O notation (which I'm guessing is the overall number of calculations). I have included the Java code and what I believe to be the number of primitive operations next to each line, but I am unsure whether I have counted them correctly. Thanks in advance, and sorry for my inexperience; I'm finding this hard to grasp :)
double[] arrayA = new double[arrayX.length];  *(3 Operations)*
for (int i = 0; i < arrayX.length; ++i)       *(4n + 2 Operations)*
{
    double sum = arrayX[i];                   *(3n Operations)*
    for (int j = 0; j < i; ++j)               *(4n^2 + 2n Operations)*
    {
        sum = sum + arrayX[j];                *(5n^2 Operations)*
    }
    arrayA[i] = sum / (i + 1);                *(6n Operations)*
}
return arrayA;                                *(1 Operation)*
Total number of operations: 9n^2 + 15n + 6
I don't think there's any standard definition of what constitutes a "primitive operation". I'm assuming this is a class assignment; if your instructor has given you detailed information about which operations to count as primitive, then go by that. Otherwise I don't think they can fault you for any way you count them, as long as you have a reasonable explanation.
Regarding the inner loop:
for (int j = 0; j < i; ++j)
please note that the total number of times the loop body executes is not n^2, but rather 0 + 1 + 2 + ... + (n-1) = n(n-1)/2. So your calculations are probably incorrect there.
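If it helps to convince yourself, here is a small standalone sketch (my own addition, not part of your assignment code) that counts the inner-loop executions and compares them against n(n-1)/2:

public class InnerLoopCount {
    public static void main(String[] args) {
        for (int n = 1; n <= 6; n++) {
            long count = 0;
            for (int i = 0; i < n; i++) {
                for (int j = 0; j < i; j++) {
                    count++;                // one execution of the inner-loop body
                }
            }
            System.out.println("n=" + n + "  count=" + count
                    + "  formula=" + n * (n - 1) / 2);
        }
    }
}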
Big-O notation is not really "the total number of calculations"; roughly speaking, it's a way of estimating how the number of calculations grows when n grows, by saying that the number of calculations is roughly proportional to some function of n. If the number of calculations is Kn^2 for some constant K, we say that the number of calculations is O(n^2), regardless of what the constant K is. Therefore, it doesn't much matter what you count as primitive operations. You might get 9n^2; someone else who counts different operations may get 7n^2 or 3n^2, but it doesn't matter: it's all O(n^2). And the lower-degree terms (15n + 6) don't count at all, since they grow more slowly than the Kn^2 term; they aren't relevant to determining the appropriate big-O formula.
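To see the lower-degree terms fade, here is a tiny sketch (my own illustration) that prints the ratio of 9n^2 + 15n + 6 to 9n^2; the ratio approaches 1 as n grows:

public class LowerOrderTerms {
    public static void main(String[] args) {
        // The ratio of the full count to the leading term tends to 1,
        // so 15n + 6 stops mattering as n grows.
        for (long n = 10; n <= 10_000_000; n *= 100) {
            double full = 9.0 * n * n + 15.0 * n + 6.0;
            double lead = 9.0 * n * n;
            System.out.println("n=" + n + "  ratio=" + (full / lead));
        }
    }
}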
An algorithm that goes through all possible sequences of indexes inside an array.
The time complexity of a single loop is linear, and of two nested loops is quadratic, O(n^2). But what if a third loop is nested inside and goes through all the indexes between the indexes of the outer two? Does the time complexity rise to cubic, O(n^3)? When N becomes very large it doesn't seem that there are enough iterations to consider the complexity cubic, yet it seems too big to be quadratic, O(n^2).
Here is the algorithm, with N = array length:
for (int i = 0; i < N; i++)
{
    for (int j = i; j < N; j++)
    {
        for (int start = i; start <= j; start++)
        {
            // statement
        }
    }
}
Here is a simple visual of the iterations when N=7 (the pattern continues until i=7):
[visual of the iteration pattern omitted]
Should we consider the time complexity here quadratic, cubic, or something else entirely?
For the basic
for (int i = 0; i < N; i++) {
    for (int j = i; j < N; j++) {
        // something
    }
}
we execute something n * (n+1) / 2 times => O(n^2). As to why: it is the simplified form of
sum (sum 1 from y=x to n) from x=1 to n.
For your new case we have a similar formula:
sum (sum (sum 1 from z=x to y) from y=x to n) from x=1 to n. The result is n * (n + 1) * (n + 2) / 6 => O(n^3) => the time complexity is cubic.
The 1 in both formulas is where you plug in the cost of something; in particular, this is where you would extend the formula further.
Note that all the indices may be off by one, I did not pay particular attention to < vs <=, etc.
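As a quick sanity check (a sketch of my own, not part of the answer), this counts the iterations of both loop nests and compares them with the closed forms above:

public class LoopCountCheck {
    public static void main(String[] args) {
        for (int n = 1; n <= 5; n++) {
            long two = 0, three = 0;
            for (int i = 0; i < n; i++) {
                for (int j = i; j < n; j++) {
                    two++;                                  // double-loop body
                    for (int start = i; start <= j; start++) {
                        three++;                            // triple-loop body
                    }
                }
            }
            System.out.println("n=" + n
                    + "  double=" + two + " (expect " + n * (n + 1) / 2 + ")"
                    + "  triple=" + three + " (expect " + n * (n + 1) * (n + 2) / 6 + ")");
        }
    }
}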
Short answer, O(choose(N+k, N)) which is the same as O(choose(N+k, k)).
Here is the long answer for how to get there.
You have the basic version of the question correct. With k nested loops, your complexity is going to be O(N^k) as N goes to infinity. However, as k and N both vary, the behavior is more complex.
Let's consider the opposite extreme. Suppose that N is fixed, and k varies.
If N is 0, your time is constant because the outermost loop fails on the first iteration. If N = 1 then your time is O(k), because you go through all of the levels of nesting with only one choice at each. If N = 2 then something more interesting happens: you go through the nesting over and over again, and it takes time O(k^2). And in general, with fixed N the time is O(k^N), where one factor of k comes from the time taken to traverse the nesting, and a factor of O(k^(N-1)) from the number of ways the sequence of indexes can advance. This is an unexpected symmetry!
Now what happens if k and N are both big? What is the time complexity of that? Well here is something to give you intuition.
Can we describe all of the times that we arrive at the innermost loop? Yes!
Consider k+N-1 slots, with k of them being "entered one more loop" and N-1 of them being "we advanced the index by 1". I assert the following:
These correspond 1-1 to the sequences of decisions by which we reached the innermost loop, as can be seen by looking at which indexes are bigger than others, and by how much.
The "entered one more loop" entries at the end represent the work needed to get to the innermost loop for this iteration that did not lead to any other loop iterations.
If 1 < N we actually need one more than that in unique work to get to the end.
Now this looks like a mess, but there is a trick that simplifies it quite unexpectedly.
The trick is this. Suppose that we took one of those patterns and inserted one extra "we advanced the index by 1" somewhere in that final stretch of "entered one more loop" entries at the end. How many ways are there to do that? The answer is that we can insert that last entry in between any two spots in that last stretch, including beginning and end, and there is one more way to do that than there are entries. In other words, the number of ways to do that matches how much unique work there was getting to this iteration!
And what that means is that the total work is proportional to O(choose(N+k, N)) which is also O(choose(N+k, k)).
It is worth knowing that, from the normal approximation to the binomial formula, if N = k then this turns out to be O(2^(N+k)/sqrt(N+k)), which indeed grows faster than any polynomial. If you need a more general or precise approximation, you can use Stirling's approximation for the factorials in choose(N+k, N) = (N+k)! / (N! k!).
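As a rough numeric check of that approximation (my own sketch, not part of the answer): for N = k, the exact value choose(2N, N) is compared against 2^(2N)/sqrt(pi*N), which is 2^(N+k)/sqrt(N+k) up to a constant factor:

public class BinomialApprox {
    // choose(n, r) computed in floating point, one factor at a time
    static double choose(int n, int r) {
        double c = 1.0;
        for (int i = 1; i <= r; i++) {
            c = c * (n - r + i) / i;
        }
        return c;
    }

    public static void main(String[] args) {
        for (int n = 4; n <= 24; n += 4) {
            double exact = choose(2 * n, n);
            double approx = Math.pow(2.0, 2 * n) / Math.sqrt(Math.PI * n);
            System.out.printf("N=k=%d  exact=%.4e  approx=%.4e  ratio=%.4f%n",
                    n, exact, approx, exact / approx);
        }
    }
}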
I've read about Big-O notation for some time, and I learned that when calculating it we have to assume that every statement that doesn't depend on the size of the input data takes a constant number C of computational steps.
My program goes like this.
It "always" takes in a "48 bit" random seed and produces an output, and the actual moves that happen within the process of producing the output varies according to the seed value itself, not by the size because it's fixed.
I ran this process in a for loop n times, in order to get n outputs.
Does this mean the Big O notation is O(n) for my program? Or am I completely misunderstanding something?
So, the number of loops is just what I write in the code. For example, if I set it to 1000, it takes in 1000 input seeds and produces 1000 outputs. The structure within the loop is fixed: the number of inner for loops and the number of if-else or switch statements inside the bigger loop do not change. The only thing that changes inside the bigger loop is which if-branch is taken, depending on the value of the seed.
The complexity is always expressed relative to something.
Since your input length is constant, it doesn't make much sense to express the complexity relative to that. It may have O(n) complexity relative to the number of loop iterations, but again, since that value is hard-coded, this information will have little value to the user.
Perhaps most useful in your case is the complexity relative to the input value. If that is constant, you can say that your program performs in constant time, because no matter what the (valid) user input is, the time it takes your program to produce the output will be roughly the same.
For example, consider this method, whose cost depends on several different parameters:

int f(int[] a, int[] b, int c) {
    int n = a.length;
    int m = b.length;
    int y = 0;
    for (int i = 0; i < n; ++i) {
        y += a[i];
        for (int j = 0; j < m; ++j) {
            y -= b[j];
        }
    }
    for (int k = 0; k < c; ++k) {
        y += 13;
    }
    return y;
}
Counting steps (loop iterations, recursive calls), the complexity is O(n*m) + O(c).
A loop that repeatedly halves a value, such as

for (long k = seed; k > 1; k /= 2) {
    ...;
}

would give O(log2(seed)), here at most 48 iterations, since the seed has 48 bits.
Strictly speaking, O(n) means that this n is an algorithm parameter or input of some kind, or something derived from it. It can be the length of the input, or even of the output, but it is derived from the algorithm's parameters.
So this O(n) has a meaning when we talk about your procedure "loop this process n times", if you automated that procedure with some script. The algorithm itself still works in O(1) time. If you don't automate the procedure, just forget about Big O: manual action makes it irrelevant.
I'm trying to convert a decimal number to binary using an iterative process. How can I make this have a space complexity of O(1) instead of O(n)?
int i = 0;
int j;
int bin[] = new int[n]; // n here is my parameter int n
while (n > 0) {
    bin[i] = n % 2;
    n /= 2;
    i++;
}
// I'm reversing the order of index i with variable j to get the right order
// (e.g. 26 gives 11010 instead of 01011)
for (j = i - 1; j >= 0; j--) {
    System.out.print(bin[j]);
}
First, you don't need space for n bits if the value itself is n. You just need floor(log2(n)) + 1 bits; for example, n = 26 fits in 5 bits (11010). Using n bits won't give you wrong results, but for big values of n the memory available to your Java process might not be enough.
And, about O(1)... maybe not really what you were thinking, but:
Java's int has a specific fixed value range, which guarantees that a (positive) int value needs at most 31 bits (if you have negative numbers too, storing the sign somewhere is necessary; that's bit 32).
With that information, strictly speaking, you can get O(1) just by rewriting your loops so that they loop exactly 31 times. Then, for every value of n, your code performs exactly the same number of steps, which is O(1) by definition.
Going the bit fiddling route won't help here. There are some useful shortcuts if your values fulfil certain conditions, but if you want your code to work with any int value, the normal loop as you have here is likely the best you can get.
(Of course, CPU intrinsics may help, but those aren't accessible from Java...)
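To illustrate that fixed-iteration idea, here is a minimal sketch (my own addition; printBinary is a hypothetical name, and it assumes n >= 0 like the question's code). It always loops over all 31 value bits, so it uses no array and O(1) space:

static void printBinary(int n) {
    boolean started = false;                // suppress leading zeros
    for (int bit = 30; bit >= 0; bit--) {   // always exactly 31 iterations
        int b = (n >> bit) & 1;             // read one bit of n
        if (b == 1) {
            started = true;
        }
        if (started) {
            System.out.print(b);
        }
    }
    if (!started) {
        System.out.print(0);                // special case: n == 0
    }
    System.out.println();
}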
This algorithm reverses an array of N integers. I believe this algorithm is O(N) because for each loop iteration, the four lines of code are executed once, thus completing the job in 4N time.
public static void reverseTheNumbers(int[] list) {
    for (int i = 0; i < list.length / 2; i++) {
        int j = list.length - 1 - i;
        int temp = list[i];
        list[i] = list[j];
        list[j] = temp;
    }
}
There isn't such a thing as 4N time. The algorithm is linear because as you increase the size of the input the runtime of the algorithm increases proportionally. In other words if you doubled the size of list you would expect the algorithm to take twice as long.
It doesn't matter how many operations you do inside your loop - as long as they are each constant time (relative to the input) the runtime of the loop is determined simply by the number of iterations.
Put another way, these four statements are - all together - an O(1) operation.
int j = list.length - 1 - i;
int temp = list[i];
list[i] = list[j];
list[j] = temp;
There's nothing significant about the fact that this sequence of steps is expressed in four statements of Java syntax; experimenting with javap suggests these four lines compile into ~20 bytecode instructions, and who knows how many processor instructions that bytecode gets converted into. The good news is that Big-O notation works the same regardless of the particular syntax: a sequence of operations is O(1), or constant time, if its execution time is the same regardless of the input.
Therefore you're doing an O(1) operation N times; aka O(N).
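To see the linearity directly, here is a rough timing sketch (my own addition, placed in the same class as reverseTheNumbers; timings are noisy and JIT warm-up matters, so treat it only as an illustration). Doubling n should roughly double the time:

public static void main(String[] args) {
    for (int n = 1_000_000; n <= 8_000_000; n *= 2) {
        int[] list = new int[n];            // contents don't matter for the timing
        long start = System.nanoTime();
        reverseTheNumbers(list);
        long elapsedNanos = System.nanoTime() - start;
        System.out.println("n=" + n + "  " + elapsedNanos / 1_000_000.0 + " ms");
    }
}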
Yes, you are correct. The number of operations is linearly dependent on the size of the array (N), making it an O(N) algorithm.
Yes, the complexity of the algorithm is O(n).
However, the exact "time" (because there are no constant factors in asymptotic complexity, check comment below) is not 4 times the size of the array, we could say it is 1/2*(c1+c2+c3+c3) times the size of the array, where 1/2 corresponds to each loop iteration and each c corresponds to the time needed for each operation inside theloop.
It would be 4 times the size of the array, if the algorithm was iterating the whole array 4 times.
We have the following Java methods:
static void comb(int[] a, int i, int max) {
    if (i < 0) {
        for (int h = 0; h < a.length; h++)
            System.out.print((char)('a' + a[h]));
        System.out.print("\n");
        return;
    }
    for (int v = max; v >= i; v--) {
        a[i] = v;
        comb(a, i - 1, v - 1);
    }
}

static void comb(int[] a, int n) { // a.length <= n
    comb(a, a.length - 1, n - 1);
    return;
}
I have to determine an asymptotic estimate of the cost of the algorithm comb(int[], int) as a function of the size of the input.
Since I'm just starting out with this type of exercise, I cannot tell whether, in this case, "input size" means the size of the array a or some other method parameter.
Once the input size is identified, how do I proceed to determine the cost of a multiple recursion?
Can you tell me the recurrence equation that determines the cost?
To determine the complexity of this algorithm you have to understand where you spend most of the time. Different kinds of algorithms may depend on different aspects of their parameters: input size, input type, input order, and so on. This one depends on the array size and on n.
Operations like System.out.print, the (char) cast, 'a' + a[h], a.length, h++ and so on are constant-time operations; their cost mostly depends on the processor instructions produced by compilation and on the processor executing them. Eventually they can all be summed into a constant, say C. This constant does not depend on the algorithm or the input size, so you can safely omit it from the estimation.
This algorithm depends linearly on the input size because it cycles over its input array (with a loop from h = 0 to the last array element). And because n can be equal to the array size (a.length = n; this is the worst case for this algorithm, because it forces the recursion to execute "array size" times), we should consider this input case in our estimation. We then get another loop, with recursion, which executes the method comb another n times.
So in the worst case we get O(n*n*C) execution steps; for significantly large input sizes the constant C becomes insignificant and can be omitted from the estimation. Thus the final estimate is O(n^2).
The original method being called is comb(int[] a, int n), and you know that a.length <= n. This means you can bound the running time of the method with a function of n, but you should think about whether you can compute a better bound as a function of both n and a.length.
For example, if the method executes a.length * n steps and each step takes a constant amount of time, you can say that the method takes O(n^2) time, but O(a.length * n) would be more accurate (especially if n is much larger than a.length).
You should analyze how many times the method is called recursively, and how many operations occur in each call.
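One concrete way to start that analysis is to instrument the method and count the recursive calls empirically (a sketch of my own; combCount and calls are hypothetical names, and the printing is removed since it only adds a constant cost per combination):

static long calls = 0;

// Same recursion as comb(int[], int, int), but counting calls instead of printing.
static void combCount(int[] a, int i, int max) {
    calls++;                                // one method invocation
    if (i < 0) {
        return;                             // a complete combination is reached here
    }
    for (int v = max; v >= i; v--) {
        a[i] = v;
        combCount(a, i - 1, v - 1);
    }
}

public static void main(String[] args) {
    for (int m = 1; m <= 4; m++) {          // array length a.length
        for (int n = m; n <= 6; n++) {      // constraint from the code: a.length <= n
            calls = 0;
            combCount(new int[m], m - 1, n - 1);
            System.out.println("a.length=" + m + "  n=" + n + "  calls=" + calls);
        }
    }
}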
Basically, for a given size of input array, how many steps does it take to compute the answer? If you double the input size, what happens to the number of steps? The key is to examine your loops and work out how many times they get executed.