What will be the execution time in Big O notation in fnB? - java

I have two functions and I need to find the execution time of both in Big O notation; however, I am confused about fnB.
int fnA(int n){
    int sum = 0;
    for(int i=0; i<n; i++){
        for(int j=n; i<j; j=j-2){
            sum += i*j;
        }
    }
    return sum;
}
I got O(n^2) for fnA
int fnB(int n) {
    int sum = 0;
    for(int size = 1; size < n; size = 2*size){
        sum += fnA(size);
    }
    return sum;
}
Since size increases exponentially within the for loop in fnB, I am leaning toward fnB being O(n^3). Am I correct? If not, please correct me. Thank you.

fnA has a running time of O(n^2).
However, fnB has a running time of O(n^2 log n), since it has log2(n) iterations, and each iteration takes O(n^2) time (it actually takes O(size^2), but since size < n, we can bound it with O(n^2)).
A more detailed explanation:
fnA(n) has n iterations in the outer loop and at most n/2 iterations in the inner loop, which gives an O(n^2) upper bound. Since each iteration of fnB(n) calls fnA(size), it takes O(size^2) == O(n^2) (since size < n).
Now, the loop of fnB(n) assigns the following values to size: 2^0, 2^1, 2^2, ..., 2^k where 2^k <= n. Therefore the number of iterations is k <= log2(n), and the upper bound of fnB is O(n^2 log2(n)).
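If you want to sanity-check this empirically, here is a small, hypothetical counting harness (not part of the question) that counts fnA's inner-loop iterations and the number of fnA(size) calls fnB makes:

public class ComplexityCheck {

    // How many times does fnA(n)'s inner loop body execute?
    static long fnAIterations(int n) {
        long count = 0;
        for (int i = 0; i < n; i++) {
            for (int j = n; i < j; j = j - 2) {
                count++;                 // stands in for "sum += i * j"
            }
        }
        return count;
    }

    // How many times does fnB(n)'s loop run, i.e. how many fnA(size) calls does it make?
    static int fnBCalls(int n) {
        int calls = 0;
        for (int size = 1; size < n; size = 2 * size) {
            calls++;
        }
        return calls;
    }

    public static void main(String[] args) {
        for (int n = 1_000; n <= 16_000; n *= 2) {
            System.out.printf("n=%d  fnA iterations=%d  fnB calls=%d%n",
                    n, fnAIterations(n), fnBCalls(n));
        }
    }
}

When n doubles, the fnA count roughly quadruples (about n^2/4) while the number of calls made by fnB grows by one (about log2(n)), which is the behavior the analysis above describes.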

Big-O notation can be used to represent time-complexity or space-complexity of an algorithm.
In your program, the function fnA has a time complexity of O(n^2) because it has two nested for loops: the inner loop's condition is i < j, with j starting at n and decreasing by 2, so for each i it runs roughly (n - i)/2 times.
Your fnB function calls fnA inside a single for loop whose counter doubles each iteration, so that loop runs about log2(n) times and fnB's time complexity is O(n^2 log n).

Related

Time complexity of an algorithm where the input is known?

Learning about algorithms and I am slightly puzzled when it comes to calculating Time Complexity. To my understanding, if the output of an algorithm does not depend on the input size, it takes constant time i.e. O(1). Whereas when it does depend on the input, it is known as linear time i.e. O(n).
However, how does the time complexity work out when we know the size of the input?
For example, I have the following code which prints out all the prime numbers between 1 and 100. In this scenario, I know the size of the input (100) so how would that translate to the Time Complexity?
public void findPrime(){
    for(int i = 2; i <= 100; i++){
        boolean isPrime = true;
        for(int j = 2; j < i; j++){
            int x = i % j;
            if(x == 0)
                isPrime = false;
        }
        if (isPrime)
            System.out.println(i);
    }
}
In this case, would the complexity still be O(1) because the time is constant? Or would it be O(n), n being the upper bound in the i condition, which affects the number of iterations of both for loops?
Am I also right in saying that the bound on i affects the run time the most? The greater that bound, the longer the algorithm runs?
Would appreciate any help.
The output is not dynamic and always the same (like the input), which is by definition a constant. The complexity of calculating it is constant: it's always the same. If the upper bound were not fixed, then the complexity wouldn't be constant.
To introduce a dynamic upper bound, we need to change the code and check out the complexities of the lines:
public void findPrime(int n){
    for(int i = 2; i <= n; i++){         // sum from 2 to n
        boolean isPrime = true;          // 1
        for(int j = 2; j < i; j++){      // sum from 2 to i - 1
            int x = i % j;               // 1
            if(x == 0)                   // 1
                isPrime = false;         // 1
        }
        if (isPrime)                     // 1
            System.out.println(i);       // 1, see below
    }
}
As the number i gets larger and larger, the cost of printing it is not truly constant, but for simplicity we treat printing to System.out as constant.
Now when we know the complexities of the lines, we translate that into an equation and simplify it.
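Roughly (reconstructing the equation from the per-line costs above; the exact constants do not matter for the bound):

T(n) = sum from i=2 to n of (1 + (sum from j=2 to i-1 of 3) + 2)
     = sum from i=2 to n of 3*(i - 1)
     = 3 * (1 + 2 + ... + (n - 1))
     = 3 * n * (n - 1) / 2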
As the result is a polynomial, due to the properties of O notation, we can see that this function is O(n^2).
As the other answers have shown, you can also say it's O(n^2) just by looking at it. You need mathematical proofs only for more difficult cases (and to be sure).
If an algorithm's running time depends on the input size, it is not always/necessarily O(n^2). It may be cubic O(n^3), logarithmic O(log2(n)), etc.
When an algorithm doesn't depend on the input size, i.e. it performs a constant number of operations that doesn't grow as the input grows, that algorithm is said to have constant time complexity, which in asymptotic notation is O(1).
Usually, we want to measure the worst-case complexity of an algorithm, because that is what interests us for sufficiently large inputs (for small inputs it mostly doesn't make any difference). The worst case is the case in which every possible iteration actually executes.
Now, pay attention to your double for loop. If you keep the static range [2, 100] in your code, it will of course always hit 3 as the first prime number, and every execution will have constant time complexity O(1). But usually we want to find prime numbers in some dynamically given range, and in that case, in the worst case, both loops may iterate over the entire range, and as the range grows, the number of iterations, and hence operations, grows.
So, your code's worst-case time complexity is definitely O(n^2).
Whereas when it does depend on the input, it is known as linear time i.e. O(n).
That's not true. When it depends on the input size, it is simply not constant.
Its complexity could instead be described by some function f(n) of the input size n - examples for this are:
f(n) = n - linear
f(n) = log(n) - logarithmic
f(n) = n*n - squared
...and so on
f(n) could also be exponential, for example f(n) = 2^n, which describes an algorithm whose complexity grows very fast.
Time complexity depends on what algorithm you use. You can calculate the time complexity of an algorithm by using the following simple rules:
A primitive expression: 1
N primitive expressions: N
If you have 2 separate code blocks, where the 1st code block has time complexity A and the 2nd code block has time complexity B, the total time complexity is A + B.
If you loop a code block N times, and the code block has time complexity M, the total time complexity is N*M.
If you use a recursive function, you can calculate the time complexity by using the Master theorem: https://en.wikipedia.org/wiki/Master_theorem_(analysis_of_algorithms)
Big O notation is a mathematical notation (https://en.wikipedia.org/wiki/Big_O_notation) that describes a bound on a function. Time complexity is usually a function of the input size, so we can use big O notation to describe bounds on time complexity. Some simple rules:
constant = O(constant) = O(1)
n = O(n)
n^2 = O(n^2)
...
a*f(n) = O(f(n)), where a is a constant.
O(f(n) + g(n)) = O(max(f(n), g(n)))
...
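To see the rules in action on a small made-up method (purely illustrative, not taken from any question above):

int example(int[] a, int n) {           // assume a.length == n
    int total = 0;                      // a primitive expression: 1
    for (int i = 0; i < n; i++) {       // block 1: loop runs n times
        total += a[i];                  // body is O(1), so block 1 is n * 1 = O(n)
    }
    int pairs = 0;
    for (int i = 0; i < n; i++) {       // block 2: outer loop runs n times
        for (int j = 0; j < n; j++) {   // inner loop runs n times per outer iteration
            pairs++;                    // body is O(1), so block 2 is n * n = O(n^2)
        }
    }
    return total + pairs;               // sequential blocks add: O(n) + O(n^2) = O(max(n, n^2)) = O(n^2)
}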

Counting basic operations in function call

I've been given a simple piece of pseudocode and told to determine the big-O running time of the function myMethod() by counting the approximate number of operations it performs. The thing I am unsure about is that within the while loop of myMethod() there is a call to doIt(), which contains another while loop. I know that I need to include the operations within doIt(), but I am unsure whether it should count as n or n^2, since it is a separate function, despite being a while loop within a while loop.
I've written what I think the number of basic operations is for each line beside their respective lines, I would appreciate some guidance on this problem as I've looked around on the internet but not much luck regarding this specific issue.
static int doIt(int n)
{
    count = 0               //1
    j = 1                   //1
    while j < n             //n x n
    {
        count = count + 1   //n x n
        j = j + 2           //n x n
    }
    return count            //1
}

static int myMethod (int n)
{
    i = 1                   //1
    while(i < n)            //log n
    {
        doIt(i)             //log n
        i = i * 2           //log n
    }
    return 1;               //1
}
First, your doIt function is a basic while loop. I don't know what "n x n" is supposed to mean, but that loop is not O(n^2); it's O(N), because it executes n/2 times, which we can write as (1/2)*n, and since we don't care about constant factors when writing Big-O notation, we can say doIt has a Big-O complexity of O(N).
You correctly identified myMethod's loop as running log(N) times. Since the doIt function runs in O(N) time, the overall complexity of myMethod is the log(N) of the outer loop multiplied by the complexity of doIt, so log(N) * N, which equals O(N log N).
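If you want to convince yourself of the O(N) claim for doIt, one quick (hypothetical) check is to translate it into Java with an iteration counter:

static int doItIterations(int n) {
    int count = 0;
    int j = 1;
    while (j < n) {          // runs roughly n/2 times
        count = count + 1;
        j = j + 2;
    }
    return count;
}

For example, doItIterations(100) returns 50 and doItIterations(200) returns 100: doubling n doubles the count, which is linear, O(N), growth.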

What is the time complexity of an iteration through all possible sequences of an array

An algorithm that goes through all possible sequences of indexes inside an array.
The time complexity of a single loop is linear, and of two nested loops it is quadratic, O(n^2). But what if another loop is nested inside and goes through all the indexes between those two indexes? Does the time complexity rise to cubic O(n^3)? When N becomes very large it doesn't seem that there are enough iterations to consider the complexity cubic, yet it seems too big to be quadratic O(n^2).
Here is the algorithm, with N = array length:
for(int i=0; i < N; i++)
{
    for(int j=i; j < N; j++)
    {
        for(int start=i; start <= j; start++)
        {
            //statement
        }
    }
}
Here is a simple visual of the iterations when N=7 (the original post included a table of the (i, j, start) values, continuing until i=7).
Should we consider the time complexity here quadratic, cubic, or a different complexity class altogether?
For the basic
for (int i = 0; i < N; i++) {
    for (int j = i; j < N; j++) {
        // something
    }
}
we execute something n * (n+1) / 2 times => O(n^2). As to why: it is the simplified form of
sum (sum 1 from y=x to n) from x=1 to n.
For your new case we have a similar formula:
sum (sum (sum 1 from z=x to y) from y=x to n) from x=1 to n. The result is n * (n + 1) * (n + 2) / 6 => O(n^3) => the time complexity is cubic.
The 1 in both formulas is where you enter the cost of something. This is in particular where you extend the formula further.
Note that all the indices may be off by one, I did not pay particular attention to < vs <=, etc.
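If you want to double-check the closed form, a small verification sketch (not part of the original post) counts the statement executions directly:

public class TripleLoopCount {
    public static void main(String[] args) {
        for (int n = 1; n <= 10; n++) {
            long count = 0;
            for (int i = 0; i < n; i++) {
                for (int j = i; j < n; j++) {
                    for (int start = i; start <= j; start++) {
                        count++;                      // counts executions of //statement
                    }
                }
            }
            long formula = (long) n * (n + 1) * (n + 2) / 6;
            System.out.println("N=" + n + "  counted=" + count + "  formula=" + formula);
        }
    }
}

Both columns agree for every n, and the leading n^3/6 term is what makes the loop cubic.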
Short answer, O(choose(N+k, N)) which is the same as O(choose(N+k, k)).
Here is the long answer for how to get there.
You have the basic question version correct. With k nested loops, your complexity is going to be O(N^k) as N goes to infinity. However as k and N both vary, the behavior is more complex.
Let's consider the opposite extreme. Suppose that N is fixed, and k varies.
If N is 0, your time is constant because the outermost loop fails on the first iteration. If N = 1 then your time is O(k), because you go through all of the levels of nesting with only one choice at each level. If N = 2 then something more interesting happens: you go through the nesting over and over again, and it takes time O(k^N). And in general, with fixed N the time is O(k^N), where one factor of k is due to the time taken to traverse the nesting, and O(k^(N-1)) is taken up by the places where your sequence advances. This is an unexpected symmetry!
Now what happens if k and N are both big? What is the time complexity of that? Well here is something to give you intuition.
Can we describe all of the times that we arrive at the innermost loop? Yes!
Consider k+N-1 slots, with k of them being "entered one more loop" and N-1 of them being "we advanced the index by 1". I assert the following:
These correspond 1-1 to the sequences of decisions by which we reach the innermost loop, as can be seen by looking at which indexes are bigger than others, and by how much.
The "entered one more loop" entries at the end are the work needed to get to the innermost loop for this iteration that did not lead to any other loop iterations.
If 1 < N we actually need one more than that in unique work to get to the end.
Now this looks like a mess, but there is a trick that simplifies it quite unexpectedly.
The trick is this. Suppose that we took one of those patterns and inserted one extra "we advanced the index by 1" somewhere in that final stretch of "entered one more loop" entries at the end. How many ways are there to do that? The answer is that we can insert that last entry in between any two spots in that last stretch, including beginning and end, and there is one more way to do that than there are entries. In other words, the number of ways to do that matches how much unique work there was getting to this iteration!
And what that means is that the total work is proportional to O(choose(N+k, N)) which is also O(choose(N+k, k)).
It is worth knowing that from the normal approximation to the binomial formula, if N = k then this turns out to be O(2^(N+k)/sqrt(N+k)) which indeed grows faster than polynomial. If you need a more general or precise approximation, you can use Stirling's approximation for the factorials in choose(N+k, N) = (N+k)! / ( N! k! ).

when would a statement inside a loop affect the complexity?

for example, if I have
for(int i = 1; i < 500; i++){
    for (int j = 0; j < N; j++){
        if (array[j] == someNumber && i == someNumber)
            counter++;
    }
}
If all numbers in the array were the same, so that counter increased for each of them, would this make the complexity O(2N), O(N squared), or still just O(N), so that it never becomes O(2N)? I hope the question makes sense. I find it hard to understand how statements might affect the complexity.
Since the statements inside your loop take constant time, they don't have any impact on the overall complexity of your algorithm. The complexity of the code you posted is just O(N), since your outer loop is executed a constant number of times, and your inner loop executes N times.
An example of when it would matter would be if you did something inside a loop like call a function that searches for a value in an array, or sorts a list. Since those operations depend on the size of the array (or list), you'd have to take them into account when calculating the complexity of your algorithm.
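For example (a hypothetical variant of your loop, using a made-up helper called contains), if the inner statement itself scanned the array, its cost would multiply in:

for (int i = 1; i < 500; i++) {              // still a constant number of iterations
    for (int j = 0; j < N; j++) {            // N iterations
        // contains() scans the whole array: O(N) per call in the worst case,
        // so the total becomes 500 * N * N = O(N^2) instead of O(N)
        if (contains(array, someNumber))
            counter++;
    }
}

static boolean contains(int[] array, int value) {
    for (int k = 0; k < array.length; k++) {
        if (array[k] == value)
            return true;                     // linear search
    }
    return false;
}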

Complexity of Bubble Sort

I have seen in a lot of places that the complexity of bubble sort is O(n^2).
But how can that be, since the inner loop always runs n-i times?
for (int i = 0; i < toSort.length - 1; i++) {
    for (int j = 0; j < toSort.length - 1 - i; j++) {
        if(toSort[j] > toSort[j+1]){
            int swap = toSort[j+1];
            toSort[j + 1] = toSort[j];
            toSort[j] = swap;
        }
    }
}
And what is the "average" value of n-i? n/2.
So it runs in O(n*n/2) time, which is considered O(n^2).
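If you want to see this concretely, a small (hypothetical) variant of the sort that counts comparisons always returns n*(n-1)/2 for an array of length n:

static long countBubbleComparisons(int[] toSort) {
    long comparisons = 0;
    for (int i = 0; i < toSort.length - 1; i++) {
        for (int j = 0; j < toSort.length - 1 - i; j++) {
            comparisons++;                    // one comparison per inner iteration
            if (toSort[j] > toSort[j + 1]) {
                int swap = toSort[j + 1];
                toSort[j + 1] = toSort[j];
                toSort[j] = swap;
            }
        }
    }
    return comparisons;
}

For length 10 it returns 45 = 10*9/2, and for length 100 it returns 4950 = 100*99/2, which is the O(n^2) growth.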
There are different kinds of complexity bounds. You are using big-O notation, which means the running time of this function, in every case, is bounded above by this complexity.
As n approaches infinity, this is basically n^2 time complexity in the worst case. Time complexity is not an exact art but more of a ballpark for what sort of speed you can expect from this class of algorithm, so you are trying to be too exact.
For example, the stated time complexity may well be n^2 even though the exact count is more like n*(n-1); constant factors, lower-order terms and whatever processing overhead might be involved are ignored at this level.
Since the outer loop runs n times and, for each iteration i, the inner loop runs (n-1-i) times, the total number of operations is
(n-1) + (n-2) + ... + 1 = n*(n-1)/2 = O(n^2).
It's O(n^2), because the total work is roughly length * length.
