Time Complexity of Fibonacci Algorithm [duplicate] - java

This question already has answers here:
Computational complexity of Fibonacci Sequence
So, I've got a recursive method in Java for getting the nth Fibonacci number. The only question I have is: what's the time complexity? I think it's O(2^n), but I may be mistaken. (I know that iterative is way better, but it's an exercise.)
public int fibonacciRecursive(int n)
{
    if (n == 1 || n == 2) return 1;
    else return fibonacciRecursive(n - 2) + fibonacciRecursive(n - 1);
}

Your recursive code has exponential runtime, but I don't think the base is 2; it's probably the golden ratio (about 1.618). Of course O(1.618^n) is automatically O(2^n) too.
The runtime can be calculated recursively:
t(1)=1
t(2)=1
t(n)=t(n-1)+t(n-2)+1
This is very similar to the recursive definition of the Fibonacci numbers themselves. The +1 in the recursive equation is irrelevant for large n, so I believe t(n) grows approximately as fast as the Fibonacci numbers, and those grow exponentially with the golden ratio as the base.
You can speed it up using memoization, i.e. caching already calculated results. Then it has O(n) runtime just like the iterative version.
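For illustration, a memoized variant could look like this (a rough sketch using a simple long[] cache, not code from the question):
// Memoized Fibonacci sketch: each value is computed at most once, so the runtime is O(n).
// (long is used instead of int because Fibonacci numbers overflow int beyond n = 46.)
static long fibonacciMemo(int n, long[] cache) {
    if (n == 1 || n == 2) return 1;
    if (cache[n] != 0) return cache[n];   // already computed, reuse it
    cache[n] = fibonacciMemo(n - 1, cache) + fibonacciMemo(n - 2, cache);
    return cache[n];
}
// Usage: fibonacciMemo(50, new long[51]) returns 12586269025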
Your iterative code has a runtime of O(n)
You have a simple loop with O(n) steps and constant time for each iteration.
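A typical O(n) iterative version looks roughly like this (a sketch, not necessarily the asker's exact code):
// Iterative Fibonacci sketch: one pass with constant work per step, so O(n) time and O(1) space.
static long fibonacciIterative(int n) {
    long prev = 1, curr = 1;              // F(1) and F(2)
    for (int i = 3; i <= n; i++) {
        long next = prev + curr;
        prev = curr;
        curr = next;
    }
    return curr;
}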

You can use this to calculate Fn in O(log n).
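One well-known way to compute F(n) in O(log n) is fast doubling, based on the identities F(2k) = F(k) * (2*F(k+1) - F(k)) and F(2k+1) = F(k)^2 + F(k+1)^2; here is a minimal Java sketch (which may or may not be what the link above refers to):
// Fast-doubling Fibonacci sketch: returns {F(n), F(n+1)} using O(log n) recursive halvings.
static long[] fastFib(long n) {
    if (n == 0) return new long[]{0, 1};
    long[] half = fastFib(n / 2);
    long a = half[0], b = half[1];        // a = F(k), b = F(k+1) with k = n/2
    long c = a * (2 * b - a);             // F(2k)
    long d = a * a + b * b;               // F(2k+1)
    return (n % 2 == 0) ? new long[]{c, d} : new long[]{d, c + d};
}
// Usage: fastFib(50)[0] returns 12586269025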

Each function call does exactly one addition or returns 1. The base cases only return the value one, so the total number of additions is fib(n) - 1. The total number of function calls is therefore 2*fib(n) - 1, so the time complexity is Θ(fib(n)) = Θ(phi^n), which is bounded by O(2^n).

O(2^n)? I see only O(n) here.
I wonder why you'd continue to calculate and re-calculate these? Wouldn't caching the ones you have be a good idea, as long as the memory requirements didn't become too odious?
Since they aren't changing, I'd generate a table and do lookups if speed mattered to me.

It's easy to see (and to prove by induction) that the number of base-case calls to fibonacciRecursive is exactly equal to the final value returned, so the total number of calls is about twice that. That is indeed exponential in the input number.

Related

Worst case time complexity of Math.sqrt in java

We have a test exercise where you need to find out whether a given number N is a square of another number or not, with the smallest possible time complexity.
I wrote:
public static boolean what2(int n) {
    double newN = (double) n;
    double x = Math.sqrt(newN);
    int y = (int) x;
    if (y * y == n)       // n is a perfect square when the truncated root squares back to n
        return true;
    else
        return false;
}
I looked online and specifically on SO to try and find the complexity of sqrt but couldn't find it. This SO post is for C# and says it's O(1), and this Java post says it's O(1) but could potentially iterate over all doubles.
I'm trying to understand the worst time complexity of this method. All other operations are O(1) so this is the only factor.
Would appreciate any feedback!
Using the floating point conversion is OK because Java's int type is 32 bits and Java's double type is the IEEE 754 64-bit format, which can represent all values of 32-bit integers exactly.
If you were to implement your function for long, you would need to be more careful, because many large long values are not represented exactly as doubles, so taking the square root and converting it to an integer type might not yield the actual square root.
All operations in your implementation execute in constant time, so the complexity of your solution is indeed O(1).
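For example, a long version might absorb that rounding error by checking the neighbouring candidates (a sketch, with a method name of my own choosing):
// Perfect-square check for long: Math.sqrt on a long can be off by one ulp,
// so verify the candidates around the rounded root explicitly.
static boolean isPerfectSquare(long n) {
    if (n < 0) return false;
    long r = (long) Math.sqrt((double) n);                 // approximate root
    for (long c = Math.max(0, r - 1); c <= r + 1; c++) {
        // 3037000499 is the largest value whose square still fits in a long
        if (c <= 3037000499L && c * c == n) return true;
    }
    return false;
}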
If I understood the question correctly, the Java instruction can be converted by just-in-time compilation to the native fsqrt instruction (though I don't know whether this actually happens), which, according to this table, uses a bounded number of processor cycles, which means the complexity would be O(1).
Java's Math.sqrt actually delegates to StrictMath.sqrt; one implementation of it can be found here. Looking at the sqrt function, the complexity appears to be constant time: the while(r != 0) loop inside runs a bounded number of iterations.

Is the time complexity of an algorithm calculated only based on the number of times a loop executes?

I have a doubt about calculating time complexity: is it calculated based on the number of times a loop executes? My question stems from the situation below.
I have a class A, which has a String attribute.
class A {
    String name;
}
Now, I have a list of class A instances. This list has different names in it. I need to check whether the name "Pavan" exist in any of the objects in the list.
Scenario 1:
Here the for loop executes listA.size() times, which can be said to be O(n).
public boolean checkName(List<A> listA, String inputName){
    for(A a : listA){
        if(a.name.equals(inputName)){
            return true;
        }
    }
    return false;
}
Scenario 2:
Here the for loop executes roughly listA.size()/2 + 1 times.
public boolean checkName(List<A> listA, String inputName){
    int length = (listA.size() + 1) / 2;   // ceil(size/2), so the middle element is covered
    for(int i = 0; i < length; i++){
        if(listA.get(i).name.equals(inputName)
                || listA.get(listA.size() - i - 1).name.equals(inputName)){
            return true;
        }
    }
    return false;
}
I minimized the number of times the for loop executes, but I increased the complexity of the logic.
Can we say this is O(n/2)? If so, can you please explain it to me?
First note that in Big-O notation there is nothing such as O(n/2) as 1/2 is a constant factor which is ignored in this notation. The complexity would remain as O(n). So by modifying your code you haven't changed anything regarding complexity.
In general estimating the number of times a loop is executed with respect to input size and the operation that actually is associated with a cost in time is the way to get to the complexity class of the algorithm.
The operation that is producing cost in your method is String.equals, which, looking at its implementation, produces cost by comparing characters.
In your example the input size is not strictly equal to the size of the list. It also depends on how large the strings contained in that list are and how large the inputName is.
So let's say the largest string in the list is m1 characters and the inputName is m2 characters in length. So for your original checkName method the complexity would be O(n*min(m1,m2)) because of String.equals comparing at most all characters of a string.
For most applications the term min(m1,m2) doesn't matter as either one of the compared strings is stored in a fixed size database column for example and therefore this expression is a constant, which is, as said above, ignored.
No. In big O expressions, all constant factors are ignored.
We only care about the growth in n, such as O(n^2) or O(log n).
Time and space complexity are calculated based on the number of operations executed and the number of units of memory used, respectively.
Regarding time complexity: all the operations are taken into account and counted. Because it's hard to compare, say, O(2*n^2+5*n+3) with O(3*n^2-3*n+1), equivalence classes are used. That means that for very large values of n, the two previous examples will have roughly similar values (more exactly: they have a similar rate of growth). Therefore, you reduce the expression to its most basic form, saying that the two examples are in the same equivalence class as O(n^2). Similarly, O(n) and O(n/2) are in the same class and therefore both are O(n).
Because of this, you can ignore most constant-time operations (such as .size(), .length() on collections, assignments, etc.) as they don't really count in the end. Therefore, you're left with loop operations and sometimes complex computations (that somewhere lower on the stack use loops themselves).
To better get an understanding on the 3 classes of complexity, try reading articles on the subject, such as: http://discrete.gr/complexity/
Time complexity is a measure of the theoretical time it will take for an operation to be executed.
While normally any improvement in the time required is significant, in time complexity we are interested only in the order of magnitude. That means:
If an operation for N objects requires N time intervals then it has complexity O(N).
If an operation for N objects requires N/2, its complexity is still O(N) though.
The above paradox is explained as follows: if you calculate the operation for large N, there is no big difference between the N/2 case and the N case. If the complexity is O(N^2), then O(N) is negligible for large N, and that's why we are interested in the order of magnitude.
In other words any constant is thrown away when calculating complexity.
As for the question:
Is it calculated based on the number of times a loop executes?
Well, it depends on what the loop contains. If only basic operations are executed inside the loop, then yes. To give an example: if a loop runs an eigenanalysis on each iteration, and that analysis has complexity O(N^3), you cannot say that your overall complexity is simply O(N).
The complexity of an algorithm is measured by how its processing time or space requirement responds to the input size. I think you are missing the fact that the notations used to express complexity are asymptotic notations.
As per your question, you have reduced the loop execution count, but not the linear relation to the input size.

Complexity in the best, average, and worst cases

I created this method in Java that indicates whether an integer array is sorted or not. What is its complexity? I think the best case is O(1) and the worst case is O(n), but what about the average case?
static boolean order(int[] a){
    for(int i = 0; i < a.length - 1; i++){
        if(a[i] > a[i+1]) return false;
    }
    return true;
}
You didn't say anything about your input, so suppose it's totally random. Then for any pair of neighbouring elements we have a 50% chance that they are ordered. That means we have probability 1 of making at least 1 step, 0.5 of making at least 2 steps, 0.25 for at least 3 steps, and generally 2^(1-k) for at least k steps. Let's calculate the expected number of steps: E = 1 + 1/2 + 1/4 + ... = sum over k of 2^(1-k).
I don't know how to calculate the sum of this series, so I used Wolfram Alpha and got the answer 2, so it's a constant.
So as I understand it, the average case for random input is O(1).
I'm not sure this is the correct way to calculate average complexity, but it seems fine to me.
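As a sanity check (my own sketch, not part of the answer above), you can estimate the average number of comparisons empirically; it settles at a small constant independent of n (a little below 2, since adjacent comparisons are not fully independent):
import java.util.Random;

// Empirical estimate of the average number of comparisons order() performs on random input.
public class AverageCaseCheck {
    static int comparisons(int[] a) {
        int steps = 0;
        for (int i = 0; i < a.length - 1; i++) {
            steps++;
            if (a[i] > a[i + 1]) break;    // same early exit as order()
        }
        return steps;
    }

    public static void main(String[] args) {
        Random rnd = new Random();
        int n = 1000, trials = 100000;
        long total = 0;
        for (int t = 0; t < trials; t++) {
            int[] a = new int[n];
            for (int i = 0; i < n; i++) a[i] = rnd.nextInt();
            total += comparisons(a);
        }
        // Prints a value around 1.7 regardless of n, i.e. a constant.
        System.out.println("average comparisons: " + (double) total / trials);
    }
}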
Complexity is usually quoted in worst case, which in your case is O(n).

Big O for a finite, fixed size set of possible values

This question triggered some confusion and many comments about whether the algorithms proposed in the various answers are O(1) or O(n).
Let's use a simple example to illustrate the two points of view:
we want to find a long x such that a * x + b = 0, where a and b are known, non-zero longs.
An obvious O(1) algo is x = - b / a
A much slower algo would consist in testing every possible long value, which would be about 2^63 times slower on average.
Is the second algorithm O(1) or O(n)?
The arguments presented in the linked questions are:
it is O(n) because in the worst case you need to loop over all possible long values
it is O(1) because its complexity is of the form c * O(1), where c = 2^64 is a constant.
Although I understand the argument to say that it is O(1), it feels counter-intuitive.
ps: I added java as the original question is in Java, but this question is language-agnostic.
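To make the two algorithms concrete, here is a hedged Java sketch (the method names are mine, not from the question):
// Direct O(1) solution: x = -b / a (assuming an exact long solution exists).
static long solve(long a, long b) {
    return -b / a;
}

// The "much slower" variant: try every possible long until a * x + b == 0.
// The number of candidates is a fixed 2^64, which is exactly the point of the
// O(1)-versus-O(n) debate.
static long bruteForce(long a, long b) {
    for (long x = Long.MIN_VALUE; ; x++) {
        if (a * x + b == 0) return x;
        if (x == Long.MAX_VALUE) throw new ArithmeticException("no solution");
    }
}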
The complexity is only relevant if there is a variable N. So, the question makes no sense as is. If the question was:
A much slower algo would consist in testing every possible value in a range of N values, which would be about N times slower on average.
Is the second algorithm O(1) or O(N)?
Then the answer would be: this algorithm is O(N).
Big O describes how an algorithm's performance will scale as the input size n scales. In other words as you run the algorithm across more input data.
In this case the input data is a fixed size so both algorithms are O(1) albeit with different constant factors.
If you took "n" to mean the number of bits in the numbers (i.e. you removed the restriction that it's a 64-bit long), then you could analyze for a given bit size n how do the algorithms scale.
In this scenario, the first would still be O(1) (see Qnan's comment), but the second would now be O(2^n).
I highly recommend watching the early lectures from MIT's "Introduction to Algorithms" course. They are a great explanation of Big O (and Big Omega/Theta) although do assume a good grasp of maths.
Checking every possible input is O(2^N) in the number of bits N in the solution. When you make the number of bits constant, both algorithms are O(1): you know exactly how many solutions you need to check.
Fact: Every algorithm that you actually run on your computer is O(1), because the universe has finite computational power (there are finitely many atoms and finitely many seconds have passed since the Big Bang).
This is true, but not a very useful way to think about things. When we use big-O in practice, we generally assume that the constants involved are small relative to the asymptotic terms, because otherwise only giving the asymptotic term doesn't tell you much about how the algorithm performs. This works great in practice because the constants are usually things like "do I use an array or a hash map", which is at most about a 30x difference, while the inputs are 10^6 or 10^9, so the difference between a quadratic and a linear algorithm is more important than constant factors. Discussions of big-O that don't respect this convention (like algorithm #2) are pointless.
Whatever the values of a or b are, the worst case is still to check 2^64 or 2^32 or 2^somevalue values. This algorithm's complexity is O(2^k), where k is the number of bits used to represent a long value, or O(1) if we consider only the values of a and b.

BigO running time on some methods

OK, these are all pretty simple methods, and there are a few of them, so I didn't want to create multiple questions when they are all about the same thing. Big O is my weakness; I just can't figure out how they come up with these answers. Is there any way you can give me some insight into your thinking for analyzing the running times of some of these methods? How do you break it down? How should I think when I see something like these? (Specifically the second one, I don't get how that's O(1).)
function f1:
    loop 3 times
        loop n times
Therefore O(3*n) which is effectively O(n).
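As a purely hypothetical illustration of the f1 description (not the actual code from the question):
// Hypothetical f1: a constant (3x) outer loop around a linear inner loop, about 3*n steps.
static void f1(int n) {
    for (int i = 0; i < 3; i++) {          // loop 3 times
        for (int j = 0; j < n; j++) {      // loop n times
            // constant-time work per iteration
        }
    }
}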
function f2:
loop 50 times
O(50) is effectively O(1).
We know it will loop 50 times because it will go until n = n - (n / 50) is 0. For this to be true, it must iterate 50 times (n - (n / 50)*50 = 0).
function f3:
    loop n times
        loop n times
Therefore O(n^2).
function f4:
recurse n times
You know this because worst case is that n = high - low + 1. Disregard the +1.
That means that n = high - low.
To terminate,
arr[hi] * arr[low] > 10
Assume that this doesn't occur until low is incremented to the highest it can go (high).
This means n = high - 0 and we must recurse up to n times.
function f5:
loops ceil(log_2(n)) times
We know this because of the m/=2.
For example, let n=10. log_2(10) = 3.3, the ceiling of which is 4.
10 / 2 = 5
5 / 2 = 2.5
2.5 / 2 = 1.25
1.25 / 2 = 0.625
In total, there are 4 iterations.
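Again purely as a hypothetical illustration (not the actual f5 from the question), a loop matching that trace halves m on each pass:
// Hypothetical f5 shape: m is halved on every pass, so the loop runs about log2(n) times.
static int f5(int n) {
    int count = 0;
    for (double m = n; m >= 1; m /= 2) {   // 10 -> 5 -> 2.5 -> 1.25 -> stop
        count++;
    }
    return count;                          // f5(10) == 4, matching the trace above
}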
You get an n^2 analysis when performing a loop within a loop, such as in the third method.
However, the first method doesn't get an n^2 timing analysis, because the outer loop is defined to run exactly three times. This makes the timing for the first one 3n, but we don't care about constant factors in Big-O.
The second one, introduces an interesting paradigm, where despite the fact that you have a single loop, the timing analysis is still O(1). This is because if you were to chart the timing it takes to perform this method, it wouldn't behave as O(n) for smaller numbers. For larger numbers it becomes obvious.
For the fourth method, you have O(n) timing because your recursive function call passes lo + 1. This is similar to using a for loop and incrementing with lo++/++lo.
The last one has O(log n) timing because you're dividing your variable by two. Just remember that anything that reminds you of a binary search will have log n timing.
There is also another trick to timing analysis. Say you had a loop within a loop, and within each of the two loops you were reading lines from a file or popping of elements from a stack. This actually would only be a O(n) method, because a file only has a certain number of lines you can read, and a stack only has a certain number of elements you can pop off.
The general idea of big-O notation is this: it gives a rough answer to the question "If you're given a set of N items, and you have to perform some operation repeatedly on these items, how many times will you need to perform this operation?" I say a rough answer, because it (most of the time) doesn't give a precise answer of "5*N+35", but just "N". It's like a ballpark. You don't really care about the precise answer, you just want to know how bad it will get when N gets large. So answers like O(N), O(N*N), O(logN) and O(N!) are typical, because they each represent sort of a "class" of answers, which you can compare to each other. An algorithm with O(N) will perform way better than an algorithm with O(N*N) when N gets large enough, it doesn't matter how lengthy the operation is itself.
So I break it down thus: First identify what the N will be. In the examples above it's pretty obvious - it's the size of the input array, because that determines how many times we will loop. Sometimes it's not so obvious, and sometimes you have multiple input data, so instead of just N you also get M and other letters (and then the answer is something like O(N*M*M)).
Then, when I have my N figured out, I try to identify the loop which depends on N. Actually, these two things often get identified together, as they are pretty much tied together.
And, lastly of course, I have to figure out how many iterations the program will make depending on N. And to make it easier, I don't really try to count them, just try to recognize the typical answers - O(1), O(N), O(N*N), O(logN), O(N!) or perhaps some other power of N. The O(N!) is actually pretty rare, because it's so inefficient, that implementing it would be pointless.
If you get an answer of something like N*N+N+1, then just discard the smaller ones, because, again, when N gets large, the others don't matter anymore. And ignore if the operation is repeated some fixed number of times. O(5*N) is the same as O(N), because it's the ballpark we're looking for.
Added: As asked in the comments, here are the analysis of the first two methods:
The first one is easy. There are only two loops, the inner one is O(N), and the outer one just repeats that 3 times. So it's still O(N). (Remember - O(3N) = O(N)).
The second one is tricky. I'm not really sure about it. After looking at it for a while I understood why it loops at most only 50 times. Since this is not dependent on N at all, it counts as O(1). However, if you were to pass it, say, an array of only 10 items, all positive, it would go into an infinite loop. That's O(∞), I guess. So which one is it? I don't know...
I don't think there's a formal way of determining the big-O number for an algorithm. It's like the halting problem. In fact, come to think of it, if you could universally determine the big-O for a piece of code, you could also determine whether it ever halts or not, thus contradicting the halting problem. But those are just my musings.
Typically I just go by... dunno, sort of a "gut feeling". Once you "get" what the Big-O represents, it becomes pretty intuitive. But for complicated algorithms it's not always possible to determine. Take Quicksort for example. On average it's O(N*logN), but depending on the data it can degrade to O(N*N). The questions you'll get on the test though should have clear answers.
The second one is 50 because big O is a function of the length of the input. That is, if the input size changes from 1 million to 1 billion, the runtime should increase by a factor of 1000 if the function is O(N), and by a factor of 1 million if it's O(N^2). However, the second function runs in time 50 regardless of the input length, so it's O(1). Technically it would be O(50), but constants don't matter for big O.
