I've always taken it for granted that iterative search is the go-to method for finding maximum values in an unsorted list.
The thought came to me rather randomly, but in a nutshell: I believe I can accomplish the task in O(log n) time, with n being the input array's size.
The approach piggy-backs on merge sort: divide and conquer.
Step 1: divide the findMax() task into two sub-tasks, findMax(leftHalf) and findMax(rightHalf). This division should be finished in O(log n) time.
Step 2: merge the two maximum candidates back up. Each layer in this step should take constant time O(1), and there are, per the previous step, O(log n) such layers. So it should also be done in O(1) * O(log n) = O(log n) time (pardon the abuse of notation). This is so wrong: each comparison is done in constant time, but there are 2^j / 2 such comparisons to be done at level j (the 2^j candidates at that level form 2^j / 2 pairs).
Thus, the whole task should be completed in O(log n) time. Correction: O(n) time.
However, when I try to time it, I get results that clearly reflect a linear O(n) running time.
size = 100000000 max = 0 time = 556
size = 200000000 max = 0 time = 1087
size = 300000000 max = 0 time = 1648
size = 400000000 max = 0 time = 1990
size = 500000000 max = 0 time = 2190
size = 600000000 max = 0 time = 2788
size = 700000000 max = 0 time = 3586
How come?
Here's the code (I left the arrays uninitialized to save on pre-processing time; as far as I'd tested it, the method accurately identifies the maximum value in unsorted arrays):
public static short findMax(short[] list) {
    return findMax(list, 0, list.length);
}

public static short findMax(short[] list, int start, int end) {
    if (end - start == 1) {
        return list[start];
    }
    else {
        short leftMax = findMax(list, start, start + (end - start) / 2);
        short rightMax = findMax(list, start + (end - start) / 2, end);
        return (leftMax <= rightMax) ? rightMax : leftMax;
    }
}
public static void main(String[] args) {
    for (int j = 1; j < 10; j++) {
        int size = j * 100000000; // 100mil to 900mil
        short[] x = new short[size];

        long start = System.currentTimeMillis();
        int max = findMax(x);
        long end = System.currentTimeMillis();

        System.out.println("size = " + size + "\t\t\tmax = " + max + "\t\t\t time = " + (end - start));
        System.out.println();
    }
}
You should count the number of comparisons that actually take place:
In the final step, after you find the maximum of the first n/2 numbers and the maximum of the last n/2 numbers, you need 1 more comparison to find the maximum of the entire set of numbers.
On the previous step you have to find the maximum of the first and second groups of n/4 numbers and the maximum of the third and fourth groups of n/4 numbers, so you have 2 comparisons.
Finally, at the end of the recursion, you have n/2 groups of 2 numbers, and you have to compare each pair, so you have n/2 comparisons.
When you sum them all, you get:
1 + 2 + 4 + ... + n/2 = n-1 = O(n)
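If you want to verify this empirically, here is a minimal instrumented sketch (the counter and method name are made up for this illustration) that tallies the comparisons performed by the divide-and-conquer approach:

static long comparisons = 0; // hypothetical counter, for illustration only

public static short countingFindMax(short[] list, int start, int end) {
    if (end - start == 1) {
        return list[start];
    }
    int mid = start + (end - start) / 2;
    short leftMax = countingFindMax(list, start, mid);
    short rightMax = countingFindMax(list, mid, end);
    comparisons++; // exactly one comparison per merge
    return (leftMax <= rightMax) ? rightMax : leftMax;
}

For an array of length n, comparisons ends up at exactly n - 1, matching the sum above.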
You indeed create log(n) layers.
But at the end of the day, you still compare the elements of every bucket you created, so you touch every element. Overall you are still O(n).
With Eran's answer, you already know what's wrong with your reasoning.
But anyway, there is a theorem called the Master Theorem, which aids in the running time analysis of recursive functions.
It applies to recurrences of the following form:
T(n) = a*T(n/b) + O(n^d)
Where T(n) is the running time for a problem of size n.
In your case, the recurrence equation would be T(n) = 2*T(n/2) + O(1), so a = 2, b = 2, and d = 0. That is the case because, for each n-sized instance of your problem, you break it into 2 (= a) subproblems of size n/2 (n divided by b = 2), and combine them in O(1) = O(n^0) time.
The master theorem simply states three cases:
if a = b^d, then the total running time is O(n^d*log n)
if a < b^d, then the total running time is O(n^d)
if a > b^d, then the total running time is O(n^(log a / log b))
Your case matches the third, so the total running time is O(n^(log 2 / log 2)) = O(n)
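As a sanity check, you can also unroll the recurrence by hand: T(n) = 2*T(n/2) + c = 4*T(n/4) + 2c + c = ... = n*T(1) + c*(n/2 + ... + 4 + 2 + 1) = n*T(1) + c*(n - 1), which is O(n) and agrees with Eran's comparison count of n - 1.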
It is a nice exercise to try to understand the reason behind these three cases. They are merely the cases for which:
1st) We do the same total amount of work at each recursion level (this is the case of mergesort), so we simply multiply the merging time, O(n^d), by the number of levels, log n.
2nd) We do less work at the second recursion level than at the first, and so on. Therefore the total work is basically that of the last merge step (the first recursion level), O(n^d).
3rd) We do more work at deeper levels (your case), so the running time is O(number of leaves in the recursion tree). In your case you have n leaves at the deepest recursion level, so O(n).
There are some short videos in a Stanford Coursera course which explain the Master Method very nicely, available at https://www.coursera.org/course/algo. I believe you can always preview the course, even if not enrolled.
I'm learning about algorithms and I am slightly puzzled when it comes to calculating time complexity. To my understanding, if the output of an algorithm does not depend on the input size, it takes constant time i.e. O(1). Whereas when it does depend on the input, it is known as linear time i.e. O(n).
However, how does the time complexity work out when we know the size of the input?
For example, I have the following code which prints out all the prime numbers between 1 and 100. In this scenario, I know the size of the input (100) so how would that translate to the Time Complexity?
public void findPrime() {
    for (int i = 2; i <= 100; i++) {
        boolean isPrime = true;
        for (int j = 2; j < i; j++) {
            int x = i % j;
            if (x == 0)
                isPrime = false;
        }
        if (isPrime)
            System.out.println(i);
    }
}
In this case, would the complexity still be O(1) because the time is constant? Or would it be O(n), with n being the bound on i that affects the number of iterations of both for loops?
Am I also right in saying that the bound on i affects the algorithm the most in terms of run time? The greater the i, the longer the algorithm runs for?
Would appreciate any help.
The output is not dynamic and is always the same (like the input), which is by definition a constant. The complexity of calculating it is likewise constant: it is always the same. If the upper bound were not fixed, the complexity wouldn't be constant.
To introduce a dynamic upper bound, we need to change the code and check out the complexities of the lines:
public void findPrime(int n) {
    for (int i = 2; i <= n; i++) {      // sum from 2 to n
        boolean isPrime = true;         // 1
        for (int j = 2; j < i; j++) {   // sum from 2 to i - 1
            int x = i % j;              // 1
            if (x == 0)                 // 1
                isPrime = false;        // 1
        }
        if (isPrime)                    // 1
            System.out.println(i);      // 1, see below
    }
}
Strictly speaking, as the number i gets longer and longer, the cost of printing it is not constant. For simplicity, we say that printing to System.out is constant.
Now that we know the complexities of the individual lines, we can translate them into an equation and simplify it:
T(n) = sum over i from 2 to n of (3 + sum over j from 2 to i-1 of 3) = 3(n-1) + 3(n-1)(n-2)/2
As the result is a polynomial of degree 2, due to the properties of O notation we can see that this function is O(n^2).
As the other answers have shown, you can also say it's O(n^2) just by "looking at it". You need mathematical proofs only for more difficult cases (and to be sure).
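If you want to check the quadratic growth empirically, here is a small sketch (countPrimeOps is a name made up for this illustration) that counts the inner-loop iterations; doubling n should roughly quadruple the count:

static long countPrimeOps(int n) {
    long ops = 0;
    for (int i = 2; i <= n; i++) {
        for (int j = 2; j < i; j++) {
            ops++; // one modulo plus comparison per inner iteration
        }
    }
    return ops;
}

// countPrimeOps(1000) returns 498501
// countPrimeOps(2000) returns 1997001, about 4x, as expected for O(n^2)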
If an algorithm's scalability depends on the input size, it is not always/necessarily only O(n^2). It may be cubic O(n^3), logarithmic O(log n), and so on.
When an algorithm doesn't depend on the input size, i.e. you have a constant number of operations that doesn't grow as your input grows, that algorithm is said to have Constant Time Complexity, which in asymptotic notation is O(1).
Usually, we want to measure the Worst Case Complexity of an algorithm, because that is what matters for sufficiently large inputs (for small inputs it mostly makes no difference). So the worst case is the case in which every possible iteration actually executes.
Now, pay attention to your double for loop. If you keep the static range [2, 100] in your code, every execution does the same fixed amount of work and has constant time complexity O(1). But usually we want to find prime numbers in some dynamically given range, and in that case, in the worst case, both loops may iterate over the entire range; as the range grows, the number of iterations, and hence operations, grows too.
So, your code's worst-case time complexity is definitely O(n^2).
Whereas when it does depend on the input, it is known as linear time i.e. O(n).
That's not true. When it depends on the input size, it is simply not constant.
When it depends on the input size, the complexity is expressed as some function f(n) of the input size n.
Here, f(n) could be any function with parameter n; examples are:
f(n) = n - linear
f(n) = log(n) - logarithmic
f(n) = n*n - quadratic
...and so on
f(n) could also be exponential, for example f(n) = 2^n, which represents an algorithm whose complexity grows very fast.
Time complexity depends on the algorithm you use. You can calculate the time complexity of an algorithm using the following simple rules:
Primitive expression: 1
N primitive expressions: N
If you have 2 separate code blocks, where the 1st block has time complexity A and the 2nd has time complexity B, the total time complexity is A + B.
If you loop a code block N times, and the block has time complexity M, the total time complexity is N*M.
If you use a recursive function, you can calculate the time complexity using the Master theorem: https://en.wikipedia.org/wiki/Master_theorem_(analysis_of_algorithms)
Big O notation is a mathematical notation (https://en.wikipedia.org/wiki/Big_O_notation) that describes a bound on a function. Time complexity is usually a function of the input size, so we can use big O notation to describe a bound on the time complexity. Some simple rules:
constant = O(constant) = O(1)
n = O(n)
n^2 = O(n^2)
...
a*f(n) = O(f(n)), where a is a constant
O(f(n) + g(n)) = O(max(f(n), g(n)))
...
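As a quick illustration of how these rules combine (a made-up example, not from the question):

static void example(int[] a) {
    int n = a.length;
    for (int i = 0; i < n; i++) { }     // block A: O(n)
    for (int i = 0; i < n; i++) {       // block B: O(n) * O(n) = O(n^2)
        for (int j = 0; j < n; j++) { }
    }
    // total: O(n) + O(n^2) = O(max(n, n^2)) = O(n^2)
}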
I am taking a Java course in university and my notes give me 3 methods for calculating the sum of an ArrayList. First using iteration, second using recursion, and third using array split combine with recursion.
My question is how do I test the efficiency of these algorithms? As it is, I think the number of steps it takes for the algorithm to compute the value is what tells you the efficiency of the algorithm.
My Code for the 3 algorithms:
import java.util.ArrayList;

public class ArraySumTester {

    static int steps = 1;

    public static void main(String[] args) {
        ArrayList<Integer> numList = new ArrayList<Integer>();
        numList.add(1);
        numList.add(2);
        numList.add(3);
        numList.add(4);
        numList.add(5);

        System.out.println("------------------------------------------");
        System.out.println("Recursive array sum = " + ArraySum(numList));
        System.out.println("------------------------------------------");
        steps = 1;
        System.out.println("Iterative array sum = " + iterativeSum(numList));
        System.out.println("------------------------------------------");
        steps = 1;
        System.out.println("Array sum using recursive array split : " + sumArraySplit(numList));
    }

    static int ArraySum(ArrayList<Integer> list) {
        return sumHelper(list, 0);
    }

    static int sumHelper(ArrayList<Integer> list, int start) {
        // System.out.println("Start : " + start);
        System.out.println("Recursive step : " + steps++);
        if (start >= list.size())
            return 0;
        else
            return list.get(start) + sumHelper(list, start + 1);
    }

    static int iterativeSum(ArrayList<Integer> list) {
        int sum = 0;
        for (Integer item : list) {
            System.out.println("Iterative step : " + steps++);
            sum += item;
        }
        return sum;
    }

    static int sumArraySplit(ArrayList<Integer> list) {
        int start = 0;
        int end = list.size();
        int mid = (start + end) / 2;
        System.out.println("Recursive step : " + steps++);
        // System.out.println("Start : " + start + ", End : " + end + ", Mid : " + mid);
        // System.out.println(list);
        if (list.size() <= 1)
            return list.get(0);
        else
            return sumArraySplit(new ArrayList<Integer>(list.subList(0, mid)))
                 + sumArraySplit(new ArrayList<Integer>(list.subList(mid, end)));
    }
}
Output:
------------------------------------------
Recursive step : 1
Recursive step : 2
Recursive step : 3
Recursive step : 4
Recursive step : 5
Recursive step : 6
Recursive array sum = 15
------------------------------------------
Iterative step : 1
Iterative step : 2
Iterative step : 3
Iterative step : 4
Iterative step : 5
Iterative array sum = 15
------------------------------------------
Recursive step : 1
Recursive step : 2
Recursive step : 3
Recursive step : 4
Recursive step : 5
Recursive step : 6
Recursive step : 7
Recursive step : 8
Recursive step : 9
Array sum using recursive array split : 15
Now, from the above output, the recursive array split algorithm takes the most steps; however, according to my notes, it is as efficient as the iterative algorithm. So which is incorrect, my code or my notes?
Do you just want to look at speed of execution? If so, you'll want to look at microbenchmarking:
How do I write a correct micro-benchmark in Java?
Essentially, because of how the JVM and modern processors work, you won't get consistent results by running something a million times in a for loop and measuring the execution speed with a system timer.
That said, "efficiency" can also mean other things, like memory consumption. For instance, any recursive method runs the risk of a stack overflow, the issue this site is named after :) Try giving that ArrayList tens of thousands of elements and see what happens.
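For example, something like this sketch (the sizes are illustrative; the exact limit depends on your JVM's stack size) will typically crash the recursive version while the iterative one sails through:

ArrayList<Integer> big = new ArrayList<Integer>();
for (int i = 0; i < 100000; i++)
    big.add(i);

try {
    System.out.println(ArraySum(big));     // one stack frame per element
} catch (StackOverflowError e) {
    System.out.println("Recursion depth exceeded the stack size");
}
System.out.println(iterativeSum(big));     // fine: constant stack space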
Using System.currentTimeMillis() is the way to go. Define a start variable before your code and an end variable after it completes. The difference of these will be the time elapsed for your program to execute. The shortest time will be the most efficient.
long start = System.currentTimeMillis();
// Program to test
long end = System.currentTimeMillis();
long diff = end - start;
I suggest that you look at the running time and space complexity (these are more computer sciencey names for efficiency) of these algorithms in the abstract. This is what the so-called Big-Oh notation is for.
To be exact, of course, after making the implementations as tight and side-effect-free as possible, you should consider writing microbenchmarks.
Since you have to read the value of every element of the list in order to sum them up, no algorithm can perform better than (linear) O(n) time and O(1) space in the general case (i.e. without any other assumptions), which is exactly what your iterative algorithm does. Here n is the size of the input (the number of elements in the list). Such an algorithm is said to have linear time and constant space complexity: its running time increases as the size of the list increases, but it needs only a constant amount of memory to do its job.
The other two recursive algorithms can, at best, perform as well as this simple algorithm, because the iterative algorithm does not have any of the complications (additional memory on the stack, for instance) that recursive algorithms suffer from.
This gets reflected in what are called the constant factors of algorithms that have the same O(f(n)) running time. For instance, if you somehow found an algorithm that examines roughly half the elements of a list to solve a problem, whereas another algorithm must see all the elements, then the first algorithm has better constant factors than the second and is expected to beat it in practice, although both algorithms have a time complexity of O(n).
Now, it is quite possible to parallelize the solution to this problem by splitting the giant list into smaller lists (you can achieve the same effect via indexes into a single list) and then using a parallel summing operation, which may beat other algorithms if the list is sufficiently long. Each non-overlapping interval can be summed in parallel (at the same time), and you add the partial sums up at the end. But this is not a possibility we are considering in the current context.
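For completeness, here is a minimal sketch of that idea using Java's parallel streams (this goes beyond what your notes cover, and the speedup only shows up for large lists):

int sum = numList.parallelStream()
                 .mapToInt(Integer::intValue)
                 .sum(); // non-overlapping chunks are summed concurrently, then combined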
I would suggest using the Stopwatch from Guava (the Google core libraries for Java). Example:
Stopwatch stopwatch = Stopwatch.createStarted();
// TODO: Your tests here
long elapsedTime = stopwatch.stop().elapsed(TimeUnit.MILLISECONDS);
You get the elapsed time in whatever unit you need, and you don't need any extra calculations.
If you want to consider efficiency then you really need to look at algorithm structure rather than timing.
Load the sources for the methods you are using, dive into the structure and look for looping - that will give you the correct measure of efficiency.
I was trying to graph the time complexity of ArrayList's remove(element) method.
My understanding is that it should be O(N); however, it's giving me O(1). Can anyone point out what I did wrong here?
Thank you in advance.
public static void arrayListRemoveTiming() {
    long startTime, midPointTime, stopTime;

    // Spin the computer until one second has gone by, this allows this
    // thread to stabilize;
    startTime = System.nanoTime();
    while (System.nanoTime() - startTime < 1000000000) {
    }

    long timesToLoop = 100000;
    int N;
    ArrayList<Integer> list = new ArrayList<Integer>();

    // Fill up the list with 0 to timesToLoop - 1
    for (N = 0; N < timesToLoop; N++)
        list.add(N);

    startTime = System.nanoTime();
    for (int i = 0; i < list.size(); i++) {
        list.remove(i);

        midPointTime = System.nanoTime();
        // Run an empty loop to capture other cost.
        for (int j = 0; j < timesToLoop; j++) {
        }
        stopTime = System.nanoTime();

        // Compute the time, subtract the cost of running the loop
        // from the cost of running the loop.
        double averageTime = ((midPointTime - startTime) - (stopTime - midPointTime))
                / timesToLoop;
        System.out.println(averageTime);
    }
}
The cost of a remove is O(n) as you have to shuffle the elements to the "right" of that point "left" by one:
                  Delete D
                     |
                     V
+-----+-----+-----+-----+-----+-----+-----+
|  A  |  B  |  C  |  D  |  E  |  F  |  G  |
+-----+-----+-----+-----+-----+-----+-----+
                        <------------------
                         Move E, F, G left
If your test code is giving you O(1) then I suspect you're not measuring it properly :-)
The OpenJDK source, for example, has this:
public E remove(int index) {
    rangeCheck(index);

    modCount++;
    E oldValue = elementData(index);

    int numMoved = size - index - 1;
    if (numMoved > 0)
        System.arraycopy(elementData, index+1, elementData, index, numMoved);
    elementData[--size] = null; // Let gc do its work

    return oldValue;
}
and the System.arraycopy is the O(n) cost for this function.
In addition, I'm not sure you've thought this through very well:

for (int i = 0; i < list.size(); i++)
    list.remove(i);

This is going to remove the following elements from the original list:

0, 2, 4, 6, 8, ...

and so on, because the act of removing element 0 shifts all other elements left: the item that was originally at offset 1 will be at offset 0 once you've deleted the original offset 0, and you then move on to delete offset 1.
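You can see the skipping with a tiny example:

List<String> letters = new ArrayList<String>(Arrays.asList("A", "B", "C", "D", "E", "F"));
for (int i = 0; i < letters.size(); i++)
    System.out.print(letters.remove(i) + " "); // prints: A C E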
First off, you are not measuring complexity in this code. What you are doing is measuring (or attempting to measure) performance. When you graph the numbers (assuming that they are correctly measured) you get a performance curve for a particular use-case over a finite range of values for your scaling variable.
That is not the same as a computational complexity measure, i.e. big O or the related Bachmann-Landau notations. These are about mathematical limits as the scaling variable tends to infinity.
And this is not just a nitpick. It is quite easy to construct examples [1] where performance characteristics change markedly as N gets very large.
What you are doing when you graph performance over a range of values and fit a curve is estimating the complexity.
[1] A real example is the average complexity of various HashMap functions, which switches from O(1) to O(N) (with a very small constant) when N reaches 2^31. This happens because the hash array cannot grow beyond 2^31 slots.
The second point is that the complexity of ArrayList.remove(index) is sensitive to the value of index as well as the list length.
The "advertised" complexity of O(N) for the average and worst cases.
In the best case, the complexity is actually O(1). Really!
This happens when you remove the last element of the list; i.e. index == list.size() - 1. That can be performed with zero copying; look at the code that #paxdiablo included in his Answer.
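For example, a drain loop like this (a sketch, not your code) performs every remove with zero copying, so each call is O(1):

while (!list.isEmpty()) {
    list.remove(list.size() - 1); // numMoved == 0, no arraycopy
}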
Now to your Question. There are a number of reasons why your code could give incorrect measurements. For example:
You are not taking account of JIT compilation overheads and other JVM warmup effects.
I can see places where the JIT compiler could potentially optimize away entire loops.
The way you are measuring the time is strange. Try treating this as algebra:
((midPoint - start) - (stop - midPoint)) / count
Now simplify: you get (2*midPoint - start - stop) / count, which is not the length of any meaningful time interval.
You are only removing half of the elements from the list, so you are only measuring over the range 50,000 to 100,000 of your scaling variable. (And I expect you are then plotting against the scaling variable; i.e. you are plotting f(N + 50,000) against N.)
The time intervals you are measuring could be too small for the clock resolution on your machine. (Read the javadocs for nanoTime() to see what resolution it guarantees.)
I recommend that people wanting to avoid mistakes like the above should read:
How do I write a correct micro-benchmark in Java?
remove(int) removes the element at the i-th INDEX; no search is needed to find it (the O(n) cost of shifting the remaining elements still applies, as explained above).
You probably want remove(Object), which is O(N) because it has to search for the value first; you would need to call remove(Integer.valueOf(i)).
It would be more obvious if your list didn't have the elements in order.
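A tiny example of the difference (illustrative values):

ArrayList<Integer> list = new ArrayList<Integer>(Arrays.asList(10, 20, 30));
list.remove(1);                   // by INDEX: removes 20 -> [10, 30]
list.remove(Integer.valueOf(30)); // by VALUE: removes 30 -> [10]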
How does one find the time complexity of a given algorithm, expressed both as a function of n and in Big-O notation? For example,
// One iteration of the parameter - n is the basic variable
void setUpperTriangular(int intMatrix[0,…,n-1][0,…,n-1]) {
    for (int i = 1; i < n; i++) {       // Time Complexity {1 + (n+1) + n} = {2n + 2}
        for (int j = 0; j < i; j++) {   // Time Complexity {1 + (n+1) + n} = {2n + 2}
            intMatrix[i][j] = 0;        // Time Complexity {n}
        }
    }   // Combining both, it would be {2n + 2} * {2n + 2} = 4n^2 + 4n + 4 TC
}   // O(n^2)
Is the Time Complexity for this O(n^2) and 4n^2 + 4n + 4? If not, how did you get to your answer?
Also, I have a question about a two-param matrix with time complexity.
// Two iterations in the parameter, n^2 is the basic variable
void division(double dividend[0,…,n-1], double divisor[0,…,n-1]) {
    for (int i = 0; i < n; i++) {             // TC {1 + (n^2 + 1) + n^2} = {2n^2 + 2}
        if (divisor[i] != 0) {                // TC n^2
            for (int j = 0; j < n; j++) {     // TC {1 + (n^2 + 1) + n^2} = {2n^2 + 2}
                dividend[j] = dividend[j] / divisor[i]; // TC n^2
            }
        }
    }   // Combining all, it would be {2n^2 + 2} + n^2(2n^2 + 2) = 2n^3 + 4n^2 + 2 TC
}   // O(n^3)
Would this one be O(N^3) and 2n^3 + 4n^2 + 2? Again, if not, can somebody please explain why?
Both are O(N^2). You are processing N^2 items in the worst case.
The second example might be just O(N) in the best case (if the second argument is all zeros).
I am not sure how you get the other polynomials. Usually the exact operation count is of no importance (especially when working with a higher-level language).
What you're looking for in big O time complexity is the approximate number of times an instruction is executed. So, in the first function, you have the executable statement:
intMatrix[i][j] = 0;
Since the executable statement takes the same amount of time every time, it is O(1). So, for the first function, you can cut it down to look like this and work back from the executable statement:
i: execute n times {                // Time complexity = n*(n-1)/2
    j: execute i times {
        intMatrix[i][j] = 0;        // Time complexity = 1
    }
}
Working backwards: the i loop runs up to n times, and for each i the j loop executes i times. For example, if n = 5, the number of instructions executed is 1+2+3+4 = 10. This is an arithmetic series and can be represented by n(n-1)/2. The time complexity of the function is therefore n(n-1)/2 = n^2/2 - n/2 = O(n^2).
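You can confirm the count with a throwaway sketch (countAssignments is a made-up name for this illustration):

static long countAssignments(int n) {
    long count = 0;
    for (int i = 1; i < n; i++)
        for (int j = 0; j < i; j++)
            count++;              // stands in for intMatrix[i][j] = 0
    return count;                 // equals n*(n-1)/2
}

// countAssignments(5) returns 10; countAssignments(1000) returns 499500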
For the second function, you're looking at something similar. Your executable statement is:
dividend[j] = dividend[j] / divisor[i];
Now, with this statement it's a little more complicated: as you can see from Wikipedia, the complexity of schoolbook long division is quadratic in the number of digits. However, the dividend and divisor do NOT use your variable n, so they don't depend on it. Let's call the number of digits of the values actually stored in the arrays "m". The time complexity of the executable statement is then O(m^2). Moving on to simplify the second function:
i: execute n times {                    // Time complexity = n*(n*(1*m^2)) = O(n^2*m^2)
    j: execute n times {                // Time complexity = n*(1*m^2)
        if statement: execute ONCE {    // Time complexity = 1*m^2
            dividend[j] = dividend[j] / divisor[i]; // Time complexity = m^2
        }
    }
}
Working backwards, you can see that the inner statement takes O(m^2), and since the if statement takes the same amount of time every time, its time complexity is O(1). Your final answer is then O(n^2 * m^2). Since division takes so little time on modern processors, it is usually estimated as O(1) (see this for a better explanation of why), so what your professor is probably looking for is O(n^2) for the second function.
Big O notation, or time complexity, describes the relationship between a change in the data size (n) and the magnitude of time / space required for a given algorithm to process it.
In your case you have two loops. For each of the n iterations of the outer loop, you process up to n items in the inner loop. Thus you have O(n^2), or "quadratic", time complexity.
So for small values of n the difference is negligible, but for larger values of n the running time grows quickly.
Eliminating 0 from the divisor, as in algorithm 2, does not significantly change the time complexity, because checking whether a number equals 0 is O(1) and several orders of magnitude cheaper than O(n^2). Skipping the inner loop in that specific case still leaves O(n) work, which is dwarfed by the O(n^2) part. Your second algorithm thus technically becomes O(n) only in the best case, when the divisor series contains nothing but zeros.
I've been doing some questions, but answers are not provided, so I was wondering if mine are correct.
a) Given that a[i....j] is an integer array with n elements and x is an integer:
int front, back;
while (i <= j) {
    front = (i + j) / 3;
    back = 2 * (i + j) / 3;

    if (a[front] == x)
        return front;
    if (a[back] == x)
        return back;

    if (x < a[front])
        j = front - 1;
    else if (x > a[back])
        i = back + 1;
    else {
        j = back - 1;
        i = front + 1;
    }
}
My answer would be O(1) but I have a feeling I'm wrong.
B)
public static void whatIs(int n) {
    if (n > 0) {
        System.out.print(n + " ");
        whatIs(n / 2); // recurses twice
        whatIs(n / 2);
    }
}
ans: I'm not sure whether it is log(4n) or log(n), since the recursion happens twice.
A) Yes. O(1) is wrong. You are going around the loop a number of times that depends on i, j, x ... and the contents of the array. Work out how many times you go around the loop in the best and worst cases.
B) Simplify log(4*n) using log(a*b) -> log(a) + log(b) (basic high-school mathematics) and then apply the definition of big O.
But that isn't the right answer either. Once again, you should go back to first principles and count the number of times that the method gets called for a given value of the parameter n. And do a proof by induction.
Both answers are incorrect.
In the first example, on each iteration you either find the number or you shrink the interval to 2/3 of its length: if the length used to be n, you make it (2/3)*n. In the worst case you find x on the last iteration, when the length of the interval is 1, i.e. after k iterations where ((2/3)^k)*n = 1, giving k = log_{3/2}(n). So, just as with binary search, the complexity comes out as a logarithm: it is O(log_{3/2}(n)), which is simply O(log n).
In the second example, for a given number n you perform twice the number of operations needed for n/2, plus a constant amount of work. Start from n = 0 and n = 1 and use induction to prove that the complexity is in fact O(n).
Hope this helps.
A) This algorithm seems similar to the Golden section search. When analyzing complexity, it's sometimes easier to imagine what would happen if we extended the data structure rather than contracting it. Think of it like this: every loop removes a third of the search interval. That means that if we know exactly how long a certain length takes, we could handle 50% more elements by allowing one more loop iteration: exponential growth. Thus, the search algorithm must have complexity O(log n).
B) Every time we add a "layer" of function calls, we need to double the number of calls (since the function always calls itself twice). In other words, given a certain n and its time consumption, doubling n also requires twice as many function calls in the last layer. The algorithm is O(n).
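If you want to check this by counting rather than by induction, an instrumented sketch works (the counter and method name are made up for this illustration):

static long calls = 0;

static void countedWhatIs(int n) {
    calls++;
    if (n > 0) {
        System.out.print(n + " ");
        countedWhatIs(n / 2);
        countedWhatIs(n / 2);
    }
}

// calls after countedWhatIs(n) is exactly 4n - 1 when n is a power of two;
// doubling n roughly doubles it, consistent with O(n)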