Java: Testing Array Sum Algorithm Efficiency

I am taking a Java course at university, and my notes give me three methods for calculating the sum of an ArrayList: first using iteration, second using recursion, and third using recursive array splitting.
My question is: how do I test the efficiency of these algorithms? As it stands, I think the number of steps it takes the algorithm to compute the value is what tells you its efficiency.
My Code for the 3 algorithms:
import java.util.ArrayList;

public class ArraySumTester {

    static int steps = 1;

    public static void main(String[] args) {
        ArrayList<Integer> numList = new ArrayList<Integer>();
        numList.add(1);
        numList.add(2);
        numList.add(3);
        numList.add(4);
        numList.add(5);

        System.out.println("------------------------------------------");
        System.out.println("Recursive array sum = " + ArraySum(numList));
        System.out.println("------------------------------------------");
        steps = 1;
        System.out.println("Iterative array sum = " + iterativeSum(numList));
        System.out.println("------------------------------------------");
        steps = 1;
        System.out.println("Array sum using recursive array split : " + sumArraySplit(numList));
    }

    static int ArraySum(ArrayList<Integer> list) {
        return sumHelper(list, 0);
    }

    static int sumHelper(ArrayList<Integer> list, int start) {
        // System.out.println("Start : " + start);
        System.out.println("Recursive step : " + steps++);
        if (start >= list.size())
            return 0;
        else
            return list.get(start) + sumHelper(list, start + 1);
    }

    static int iterativeSum(ArrayList<Integer> list) {
        int sum = 0;
        for (Integer item : list) {
            System.out.println("Iterative step : " + steps++);
            sum += item;
        }
        return sum;
    }

    static int sumArraySplit(ArrayList<Integer> list) {
        int start = 0;
        int end = list.size();
        int mid = (start + end) / 2;
        System.out.println("Recursive step : " + steps++);
        // System.out.println("Start : " + start + ", End : " + end + ", Mid : " + mid);
        // System.out.println(list);
        if (list.size() <= 1)
            return list.isEmpty() ? 0 : list.get(0); // guard: an empty list sums to 0
        else
            return sumArraySplit(new ArrayList<Integer>(list.subList(0, mid)))
                 + sumArraySplit(new ArrayList<Integer>(list.subList(mid, end)));
    }
}
Output:
------------------------------------------
Recursive step : 1
Recursive step : 2
Recursive step : 3
Recursive step : 4
Recursive step : 5
Recursive step : 6
Recursive array sum = 15
------------------------------------------
Iterative step : 1
Iterative step : 2
Iterative step : 3
Iterative step : 4
Iterative step : 5
Iterative array sum = 15
------------------------------------------
Recursive step : 1
Recursive step : 2
Recursive step : 3
Recursive step : 4
Recursive step : 5
Recursive step : 6
Recursive step : 7
Recursive step : 8
Recursive step : 9
Array sum using recursive array split : 15
Now, from the above output, the recursive array split algorithm takes the most steps. However, according to my notes, it is as efficient as the iterative algorithm. So which is incorrect: my code or my notes?

Do you just want to look at speed of execution? If so, you'll want to look at microbenchmarking:
How do I write a correct micro-benchmark in Java?
Essentially, because of how the JVM and modern processors work, you won't get consistent results by running something a million times in a for loop and measuring the execution speed with a system timer.
That said, "efficiency" can also mean other things, like memory consumption. For instance, any recursive method runs the risk of a stack overflow, the issue this site is named after :) Try giving that ArrayList tens of thousands of elements and see what happens.
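For example, here is a minimal sketch of that experiment (the list size of 100,000 is an arbitrary guess at "deep enough"; it reuses the asker's ArraySumTester class, assumed to be in the same package):
import java.util.ArrayList;

public class StackOverflowDemo {
    public static void main(String[] args) {
        ArrayList<Integer> big = new ArrayList<Integer>();
        for (int i = 0; i < 100000; i++) {
            big.add(1);
        }
        // Each element costs one stack frame in the recursive version, so with a
        // default-sized thread stack this is expected to throw java.lang.StackOverflowError.
        // (The per-step printouts will flood the console; remove them for this experiment.)
        System.out.println(ArraySumTester.ArraySum(big));
    }
}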

Using System.currentTimeMillis() is the way to go. Record a start time before your code runs and an end time after it completes; the difference between them is the time your program took to execute. The shortest time will be the most efficient.
long start = System.currentTimeMillis();
// Program to test
long end = System.currentTimeMillis();
long diff = end - start;
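For instance, a quick sketch timing the asker's method (reusing numList and ArraySumTester from the question):
long start = System.currentTimeMillis();
int sum = ArraySumTester.iterativeSum(numList); // the method from the question
long end = System.currentTimeMillis();
System.out.println("sum = " + sum + ", elapsed = " + (end - start) + " ms");
With only five elements this will read as 0 ms, so grow the list substantially before comparing the three methods.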

I suggest that you look at the running time and space complexity of these algorithms in the abstract (these are the more computer-sciencey names for efficiency). This is what the so-called Big-Oh notation is for.
To be exact, of course, you should consider writing microbenchmarks after making the implementations as tight and side-effect-free as possible.
Since you have to read the value of every element of the list in order to sum the elements up, no algorithm is going to perform better than a (linear) O(n) time, O(1) space algorithm in the general case (i.e. without any other assumptions); that is exactly what your iterative algorithm is. Here n is the size of the input (the number of elements in the list). Such an algorithm is said to have linear time and constant space complexity: its running time increases as the size of the list increases, but it needs only a fixed, constant amount of additional memory to do its job.
The other two recursive algorithms can, at best, perform as well as this simple algorithm, because the iterative algorithm has none of the overhead (additional memory on the stack, for instance) that recursive algorithms suffer from.
This gets reflected in what are called the constant factors of algorithms that share the same O(f(n)) running time. For instance, if you somehow found an algorithm that examines roughly half the elements of a list to solve a problem, whereas another algorithm must see all the elements, then the first algorithm has better constant factors than the second and is expected to beat it in practice, even though both have a time complexity of O(n).
Now, it is quite possible to parallelize the solution by splitting the giant list into smaller lists (you can achieve the effect via indexes into a single list) and then using a parallel summing operation, which may beat other algorithms if the list is sufficiently long. This is because each non-overlapping interval can be summed in parallel (at the same time), with the partial sums added up at the end. But this is not a possibility we are considering in the current context.
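As an aside, here is a minimal sketch of that idea using Java 8 parallel streams (my illustration, not one of the algorithms above): the stream machinery partitions the list by index ranges, sums the non-overlapping intervals on worker threads, and combines the partial sums at the end.
import java.util.ArrayList;
import java.util.List;

public class ParallelSumSketch {
    public static void main(String[] args) {
        List<Integer> numList = new ArrayList<Integer>();
        for (int i = 0; i < 1000000; i++) {
            numList.add(i % 10);
        }
        // Partial sums are computed concurrently over index ranges, then combined.
        int sum = numList.parallelStream().mapToInt(Integer::intValue).sum();
        System.out.println("sum = " + sum);
    }
}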

I would suggest using the Stopwatch from Guava (the Google Core Libraries for Java). Example:
import com.google.common.base.Stopwatch;
import java.util.concurrent.TimeUnit;

Stopwatch stopwatch = Stopwatch.createStarted();
// TODO: Your tests here
long elapsedTime = stopwatch.stop().elapsed(TimeUnit.MILLISECONDS);
You get the elapsed time in whatever unit you need, and you don't need any extra calculations.

If you want to consider efficiency, then you really need to look at algorithm structure rather than timing.
Load the sources for the methods you are using, dive into the structure, and look for loops; that will give you the correct measure of efficiency.

Related

Counting all permutations of a string (Cracking the Coding Interview, Chapter VI - Example 12)

In Gayle Laakmann McDowell's book "Cracking the Coding Interview", chapter VI (Big O), example 12, the problem gives the following Java code for computing a string's permutations and asks for its complexity:
public static void permutation(String str) {
    permutation(str, "");
}

public static void permutation(String str, String prefix) {
    if (str.length() == 0) {
        System.out.println(prefix);
    } else {
        for (int i = 0; i < str.length(); i++) {
            String rem = str.substring(0, i) + str.substring(i + 1);
            permutation(rem, prefix + str.charAt(i));
        }
    }
}
The book argues that since there will be n! permutations, if we consider each permutation to be a leaf in the call tree, where each leaf is attached to a path of length n, then there will be no more than n*n! nodes in the tree (i.e. the number of calls is no more than n*n!).
But shouldn't the number of nodes be n!/n! + n!/(n-1)! + ... + n!/1! + n!/0! (the sum of n!/k! for k = 0 to n), since the number of calls is equivalent to the number of nodes (take a look at the figure in the video Permutations Of String | Code Tutorial by Quinston Pimenta)?
If we follow this method, the number of nodes will be 1 (for the first level/root of the tree) + 3 (for the second level) + 3*2 (for the third level) + 3*2*1 (for the fourth/bottom level),
i.e. the number of nodes = 3!/3! + 3!/2! + 3!/1! + 3!/0! = 16.
However, according to the book's method, the number of nodes will be 3*3! = 18.
Shouldn't we count shared nodes in the tree as one node, since they represent a single function call?
You're right about the number of nodes; your formula gives the exact count, whereas the book's method counts some nodes multiple times.
Your sum also approaches e * n! for large n, so it can be simplified to O(n!).
It's still technically correct to say the number of calls is no more than n * n!, as this is a valid upper bound. Depending on how it is used, that can be fine, and it may be easier to prove.
For the time complexity, we need to multiply by the average work done for each node.
First, check the String concatenation. Each iteration creates 2 new Strings to pass to the next node. The length of one String increases by 1, and the length of the other decreases by 1, but the total length is always n, giving a time complexity of O(n) for each iteration.
The number of iterations varies for each level, so we can't just multiply by n. Instead, look at the total number of iterations for the whole tree and take the average per node. With n = 3:
The 1 node in the first level iterates 3 times: 1 * 3 = 3
The 3 nodes in the second level iterate 2 times: 3 * 2 = 6
The 6 nodes in the third level iterate 1 time: 6 * 1 = 6
The total number of iterations is 3 + 6 + 6 = 15. This is about the same as the number of nodes in the tree, so the average number of iterations per node is constant.
In total, we have O(n!) iterations that each do O(n) work, giving a total time complexity of O(n * n!).
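If you want to check these counts empirically, here is a sketch that instruments the book's code with two counters (the counters and the omitted printing are my additions):
public class PermutationCount {
    static long calls = 0;      // nodes in the call tree
    static long iterations = 0; // loop iterations across all nodes

    public static void permutation(String str, String prefix) {
        calls++;
        if (str.length() == 0) {
            // a leaf: one complete permutation (printing omitted)
        } else {
            for (int i = 0; i < str.length(); i++) {
                iterations++;
                String rem = str.substring(0, i) + str.substring(i + 1);
                permutation(rem, prefix + str.charAt(i));
            }
        }
    }

    public static void main(String[] args) {
        permutation("ABC", "");
        // Expected for n = 3: calls = 16 (1 + 3 + 6 + 6) and iterations = 15 (3 + 6 + 6)
        System.out.println("calls = " + calls + ", iterations = " + iterations);
    }
}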
According to your video, where we have a string with 3 characters (ABC), the number of permutations is 6 = 3!, and 6 happens to equal 1 + 2 + 3. However, if we have a string with 4 characters (ABCD), the number of permutations should be 4 * 3!, as D could be in any position from 1 to 4, and for each position of D you can generate 3! permutations of the rest. If you re-draw the tree and count the number of permutations, you will see the difference.
According to your code, we have n! = str.length()! permutations, but each call of permutation also runs a loop from 0 to n-1. Therefore, you have O(n * n!).
Update in response to the edited question
Firstly, in programming we usually have ranges of 0 to n-1 or 1 to n, not 0 to n.
Secondly, we don't count the number of nodes in this case: if you take a look at the recursion tree in the clip again, you will see duplicated nodes. The number of permutations should be the number of leaves, which are unique among each other.
For instance, if you have a string with 4 characters, the number of leaves should be 4 * 3! = 24, and that is the number of permutations. However, in your code snippet you also have a loop from 0 to n-1 in each call, so you need to count the loops in. Thus, the complexity of your code is O(n * n!); here n = 4, giving on the order of 4 * 4! calls.

Find all the ways you can go up an n step staircase if you can take k steps at a time such that k <= n

This is a problem I'm trying to solve on my own to get a bit better at recursion (not homework). I believe I found a solution, but I'm not sure about the time complexity (I'm aware that DP would give me better results).
Find all the ways you can go up an n step staircase if you can take k steps at a time such that k <= n
For example, if my step sizes are [1,2,3] and the size of the stair case is 10, I could take 10 steps of size 1 [1,1,1,1,1,1,1,1,1,1]=10 or I could take 3 steps of size 3 and 1 step of size 1 [3,3,3,1]=10
Here is my solution:
import java.util.ArrayList;
import java.util.List;

static List<List<Integer>> problem1Ans = new ArrayList<List<Integer>>();

public static void problem1(int numSteps) {
    int[] steps = {1, 2, 3};
    problem1_rec(new ArrayList<Integer>(), numSteps, steps);
}

public static void problem1_rec(List<Integer> sequence, int numSteps, int[] steps) {
    if (problem1_sum_seq(sequence) > numSteps) {
        return;
    }
    if (problem1_sum_seq(sequence) == numSteps) {
        problem1Ans.add(new ArrayList<Integer>(sequence));
        return;
    }
    for (int stepSize : steps) {
        sequence.add(stepSize);
        problem1_rec(sequence, numSteps, steps);
        sequence.remove(sequence.size() - 1);
    }
}

public static int problem1_sum_seq(List<Integer> sequence) {
    int sum = 0;
    for (int i : sequence) {
        sum += i;
    }
    return sum;
}

public static void main(String[] args) {
    problem1(10);
    System.out.println(problem1Ans.size());
}
My guess is that the runtime is k^n, where k is the number of step sizes and n is the number of steps (3 and 10 in this case).
I came to this answer because each call loops over all k step sizes. However, the recursion depth is not the same down every branch; for instance, the sequence [1,1,1,1,1,1,1,1,1,1] involves more recursive calls than [3,3,3,1], so this makes me doubt my answer.
What is the runtime? Is k^n correct?
TL;DR: Your algorithm is O(2^n), which is a tighter bound than O(k^n), but because of some easily corrected inefficiencies the implementation runs in O(k^2 * 2^n).
In effect, your solution enumerates all of the step-sequences with sum n by successively enumerating all of the viable prefixes of those step-sequences. So the number of operations is proportional to the number of step sequences whose sum is less than or equal to n. [See Notes 1 and 2].
Now, let's consider how many possible prefix sequences there are for a given value of n. The precise computation will depend on the steps allowed in the vector of step sizes, but we can easily come up with a maximum, because any step sequence is a subset of the set of integers from 1 to n, and we know that there are precisely 2^n such subsets.
Of course, not all subsets qualify. For example, if the set of step-sizes is [1, 2], then you are enumerating Fibonacci sequences, and there are O(φ^n) such sequences. As k increases, you will get closer and closer to O(2^n). [Note 3]
Because of the inefficiencies in your code, as noted, your algorithm is actually O(k^2 * α^n), where α is some number between φ and 2, approaching 2 as k approaches infinity. (φ is 1.618..., or (1+sqrt(5))/2.)
There are a number of improvements that could be made to your implementation, particularly if your intent was to count rather than enumerate the step sequences. But that was not your question, as I understand it.
Notes
That's not quite exact, because you actually enumerate a few extra sequences which you then reject; the cost of these rejections is a multiplicative factor of k, the size of the vector of possible step sizes. However, you could easily eliminate the rejections by terminating the for loop as soon as a rejection is noticed.
The cost of an enumeration is O(k) rather than O(1) because you compute the sum of the sequence for each enumeration (often twice). That produces an additional factor of k. You could easily eliminate this cost by passing the current sum into the recursive call (which would also eliminate the multiple evaluations); a sketch follows these notes. It is trickier to avoid the O(k) cost of copying each sequence into the output list, but that can be done using a better (structure-sharing) data structure.
The question in your title (as opposed to the problem solved by the code in the body of your question) does actually require enumerating all possible subsets of {1…n}, in which case the number of possible sequences would be exactly 2^n.
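Putting Notes 1 and 2 together, here is a sketch of the tightened recursion (my rewrite of the asker's method: the currentSum parameter and the early break are the suggested changes, and the steps array is assumed to be sorted ascending; the initial call becomes problem1_rec(new ArrayList<Integer>(), 0, numSteps, steps)):
public static void problem1_rec(List<Integer> sequence, int currentSum, int numSteps, int[] steps) {
    if (currentSum == numSteps) {
        problem1Ans.add(new ArrayList<Integer>(sequence)); // found a complete sequence
        return;
    }
    for (int stepSize : steps) {
        if (currentSum + stepSize > numSteps) {
            break; // steps are sorted, so every later step also overshoots (Note 1)
        }
        sequence.add(stepSize);
        problem1_rec(sequence, currentSum + stepSize, numSteps, steps); // O(1) sum update (Note 2)
        sequence.remove(sequence.size() - 1);
    }
}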
If you want to solve this recursively, you should use a different pattern, one that allows caching of previous values, like the pattern used when calculating Fibonacci numbers. The Fibonacci function is basically the same shape as what you seek: it adds the previous number and the one before that, by index, and returns the result as the current number. You can use the same technique in your recursive function, but instead of adding f(k-1) and f(k-2), gather the sum of f(k - steps[i]). Something like this:
static List<Integer> cache = new ArrayList<Integer>();
static List<Integer> storedSteps = null; // if reused with the same steps, the cache stays valid

public static Integer problem1(Integer numSteps, List<Integer> steps) {
    if (!steps.equals(storedSteps)) {                // compare contents, not references
        storedSteps = new ArrayList<Integer>(steps); // defensive copy
        cache.clear();                               // remove all data - now invalid
        // TODO make cache+storedSteps a single structure
    }
    return problem1_rec(numSteps, steps);
}

private static Integer problem1_rec(Integer numSteps, List<Integer> steps) {
    if (numSteps < 0) { return 0; }
    if (numSteps == 0) { return 1; }
    if (cache.size() > numSteps && cache.get(numSteps) != null) {
        return cache.get(numSteps);                  // cache hit
    }
    Integer acc = 0;
    for (Integer i : steps) { acc += problem1_rec(numSteps - i, steps); }
    while (cache.size() <= numSteps) { cache.add(null); } // grow the list so the index exists
    cache.set(numSteps, acc);                        // cache miss: store the result
    return acc;
}

Explain Time Complexity?

How does one find the time complexity of a given algorithm, expressed both as a function of n and in Big-O notation? For example,
// One iteration of the parameter - n is the basic variable
void setUpperTriangular(int intMatrix[0,…,n-1][0,…,n-1]) {
    for (int i = 1; i < n; i++) {       // Time complexity {1 + (n+1) + n} = {2n + 2}
        for (int j = 0; j < i; j++) {   // Time complexity {1 + (n+1) + n} = {2n + 2}
            intMatrix[i][j] = 0;        // Time complexity {n}
        }
    }   // Combining both, it would be {2n + 2} * {2n + 2} = 4n^2 + 4n + 4 TC
}       // O(n^2)
Is the Time Complexity for this O(n^2) and 4n^2 + 4n + 4? If not, how did you get to your answer?
Also, I have a question about a two-param matrix with time complexity.
// Two iterations in the parameter, n^2 is the basic variable
void division(double dividend[0,…,n-1], double divisor[0,…,n-1]) {
    for (int i = 0; i < n; i++) {           // TC {1 + (n^2 + 1) + n^2} = {2n^2 + 2}
        if (divisor[i] != 0) {              // TC n^2
            for (int j = 0; j < n; j++) {   // TC {1 + (n^2 + 1) + n^2} = {2n^2 + 2}
                dividend[j] = dividend[j] / divisor[i];   // TC n^2
            }
        }
    }   // Combining all, it would be {2n^2 + 2} + n^2(2n^2 + 2) = 2n^3 + 4n^2 + 2 TC
}       // O(n^3)
Would this one be O(N^3) and 2n^3 + 4n^2 + 2? Again, if not, can somebody please explain why?
Both are O(n^2). You are processing n^2 items in the worst case.
The second example might be just O(n) in the best case (if the second argument is all zeros).
I am not sure how you got the other polynomials. Usually the exact complexity is of no importance (namely when working with a higher-level language).
What you're looking for in big O time complexity is the approximate number of times an instruction is executed. So, in the first function, you have the executable statement:
intMatrix[i][j] = 0;
Since the executable statement takes the same amount of time every time, it is O(1). So, for the first function, you can cut it down to look like this and work back from the executable statement:
i: execute n-1 times {          // Time complexity = (n-1)*n/2 in total
    j: execute i times {
        intMatrix[i][j] = 0;    // Time complexity = 1
    }
}
Working back, the i loop executes n-1 times and, for each pass, the j loop executes i times. For example, if n = 5, the number of assignments executed is 1+2+3+4 = 10. This is an arithmetic series and can be represented by (n-1)*n/2. The time complexity of the function is therefore (n-1)*n/2 = n^2/2 - n/2 = O(n^2).
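A throwaway sketch (mine, not part of the original answer) that confirms the count:
public class TriangularCount {
    public static void main(String[] args) {
        int n = 5;
        int count = 0;
        for (int i = 1; i < n; i++) {      // same loop bounds as setUpperTriangular
            for (int j = 0; j < i; j++) {
                count++;                   // one executed assignment
            }
        }
        // prints count = 10 = (n-1)*n/2 for n = 5
        System.out.println("count = " + count);
    }
}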
For the second function, you're looking at something similar. Your executable statement is:
dividend[j] = dividend[j] / divisor[i];
Now, with this statement it's a little more complicated: as you can see from Wikipedia, the complexity of schoolbook long division is quadratic in the number of digits. However, the dividend and divisor do NOT use your variable n, so they're not dependent on it. Let's call the size of the dividend and divisor, i.e. the actual contents of the matrix, "m". So the time complexity of the executable statement is O(m^2). Moving on to simplify the second function:
i: execute n times {                                  // Time complexity = n*(n*(1*m^2)) = O(n^2*m^2)
    j: execute n times {                              // Time complexity = n*(1*m^2)
        if statement: execute ONCE {                  // Time complexity = 1*m^2
            dividend[j] = dividend[j] / divisor[i];   // Time complexity = m^2
        }
    }
}
Working backwards, you can see that the inner statement will take O(m^2), and since the if statement takes the same amount of time every time, its time complexity is O(1). Your final answer is then O(n^2*m^2). Since division takes so little time on modern processors, it is usually estimated at O(1) (see this for a better explanation of why), so what your professor is probably looking for is O(n^2) for the second function.
Big O notation, or time complexity, describes the relationship between a change in data size (n) and the magnitude of time/space required for a given algorithm to process it.
In your case you have two loops. For each value of the outer loop variable, you process n items in the inner loop. Thus you have O(n^2), or "quadratic", time complexity.
So for small values of n the difference is negligible, but for larger values of n the running time grows quickly.
Eliminating 0 from the divisor as in algorithm 2 does not significantly change the time complexity, because checking whether a number equals 0 is O(1) and several orders of magnitude less than O(n^2). Skipping the inner loop in that specific case still leaves the outer loop's O(n) work, which is dwarfed by the O(n^2) total. Your second algorithm thus technically becomes O(n) in the best case (if there are only zeros in the divisor array).

Finding maximum in O(logn) time?

I've always taken it for granted that iterative search is the go-to method for finding maximum values in an unsorted list.
The thought came to me rather randomly, but in a nutshell: I believe I can accomplish the task in O(logn) time with n being the input array's size.
The approach piggy-backs on merge sort: divide and conquer.
Step 1: divide the findMax() task into two sub-tasks, findMax(leftHalf) and findMax(rightHalf). This division should be finished in O(logn) time.
Step 2: merge the two maximum candidates back up. Each layer in this step should take constant O(1) time, and there are, per the previous step, O(logn) such layers. So it should also be done in O(1) * O(logn) = O(logn) time (pardon the abuse of notation). This is so wrong: each comparison is done in constant time, but there are 2^j/2 such comparisons to be done at the j-th level (2^j candidates forming 2^j/2 pairs).
Thus, the whole task should be completed in O(logn) time. (Correction: O(n) time.)
However, when I try to time it, I get results that clearly reflect a linear O(n) running time.
size = 100000000 max = 0 time = 556
size = 200000000 max = 0 time = 1087
size = 300000000 max = 0 time = 1648
size = 400000000 max = 0 time = 1990
size = 500000000 max = 0 time = 2190
size = 600000000 max = 0 time = 2788
size = 700000000 max = 0 time = 3586
How come?
Here's the code (I left the arrays uninitialized to save on pre-processing time; as far as I've tested, the method accurately identifies the maximum value in unsorted arrays):
public static short findMax(short[] list) {
    return findMax(list, 0, list.length);
}

public static short findMax(short[] list, int start, int end) {
    if (end - start == 1) {
        return list[start];
    } else {
        short leftMax = findMax(list, start, start + (end - start) / 2);
        short rightMax = findMax(list, start + (end - start) / 2, end);
        return (leftMax <= rightMax) ? rightMax : leftMax;
    }
}

public static void main(String[] args) {
    for (int j = 1; j < 10; j++) {
        int size = j * 100000000; // 100mil to 900mil
        short[] x = new short[size];
        long start = System.currentTimeMillis();
        int max = findMax(x);
        long end = System.currentTimeMillis();
        System.out.println("size = " + size + "\t\t\tmax = " + max + "\t\t\t time = " + (end - start));
        System.out.println();
    }
}
You should count the number of comparisons that actually take place:
In the final step, after you find the maximum of the first n/2 numbers and the last n/2 numbers, you need 1 more comparison to find the maximum of the entire set.
In the previous step you have to find the maximum of the first and second groups of n/4 numbers and the maximum of the third and fourth groups of n/4 numbers, so you have 2 comparisons.
Finally, at the deepest level of the recursion, you have n/2 groups of 2 numbers, and you have to compare each pair, so you have n/2 comparisons.
When you sum them all you get :
1 + 2 + 4 + ... + n/2 = n-1 = O(n)
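You can confirm this with a sketch that instruments the asker's method with a comparison counter (the counter and the chosen array size are my additions):
public class FindMaxCount {
    static long comparisons = 0;

    public static short findMax(short[] list, int start, int end) {
        if (end - start == 1) {
            return list[start];
        }
        short leftMax = findMax(list, start, start + (end - start) / 2);
        short rightMax = findMax(list, start + (end - start) / 2, end);
        comparisons++; // one comparison per merge of two candidates
        return (leftMax <= rightMax) ? rightMax : leftMax;
    }

    public static void main(String[] args) {
        short[] x = new short[1 << 20]; // n = 1,048,576
        findMax(x, 0, x.length);
        // Expected output: comparisons = 1048575, i.e. n - 1
        System.out.println("comparisons = " + comparisons);
    }
}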
You indeed create log(n) layers.
But at the end of the day, you still go through each element of every created bucket. Therefore you go through every element. So overall you are still O(n).
With Eran's answer, you already know what's wrong with your reasoning.
But anyway, there is a theorem called the Master Theorem, which aids in the running time analysis of recursive functions.
It concerns recurrences of the following form:
T(n) = a*T(n/b) + O(n^d)
Where T(n) is the running time for a problem of size n.
In your case, the recurrence equation would be T(n) = 2*T(n/2) + O(1), so a = 2, b = 2, and d = 0. That is the case because, for each n-sized instance of your problem, you break it into 2 (= a) subproblems of size n/2 (n divided by b = 2), and combine their results in O(1) = O(n^0) time.
The master theorem simply states three cases:
if a = b^d, then the total running time is O(n^d*log n)
if a < b^d, then the total running time is O(n^d)
if a > b^d, then the total running time is O(n^(log a / log b))
Your case matches the third, so the total running time is O(n^(log 2 / log 2)) = O(n)
It is a nice exercise to try to understand the reason behind these three cases. They are merely the cases in which:
1st) We do the same total amount of work at each recursion level (this is the case of mergesort), so we simply multiply the merging time, O(n^d), by the number of levels, log n.
2nd) We do less work at each deeper recursion level than at the one before, so the total work is dominated by the first recursion level (the final merge step), O(n^d).
3rd) We do more work at deeper levels (your case), so the running time is O(number of leaves in the recursion tree). In your case you have n leaves at the deepest recursion level, so O(n).
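For this particular recurrence you can also see the result by unrolling it directly (a back-of-the-envelope derivation, with c standing for the constant combine cost):
T(n) = 2*T(n/2) + c
     = 4*T(n/4) + 2c + c
     = 8*T(n/8) + 4c + 2c + c
     ...
     = n*T(1) + c*(n/2 + n/4 + ... + 2 + 1)
     = n*T(1) + c*(n - 1)
     = O(n)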
There are some short videos in a Stanford Coursera course which explain the Master Method very nicely, available at https://www.coursera.org/course/algo. I believe you can always preview the course, even if not enrolled.

Complexity of BigO notation

I've been doing some practice questions, but answers are not provided, so I was wondering if my answers are correct.
a) given that a[i....j] is an integer array with n elements and x is an integer
int front, back;
while (i <= j) {
    front = (i + j) / 3;
    back = 2 * (i + j) / 3;
    if (a[front] == x)
        return front;
    if (a[back] == x)
        return back;
    if (x < a[front])
        j = front - 1;
    else if (x > a[back])
        i = back + 1;
    else {
        j = back - 1;
        i = front + 1;
    }
}
My answer would be O(1) but I have a feeling I'm wrong.
B)
public static void whatIs(int n) {
    if (n > 0) {
        System.out.print(n + " ");
        whatIs(n / 2);
        whatIs(n / 2);
    }
}
ans: I'm not sure whether it is log4(n) or log(n), since the recursion happens twice.
A) Yes. O(1) is wrong. You are going around the loop a number of times that depends on i, j, x ... and the contents of the array. Work out how many times you go around the loop in the best and worst cases.
B) Simplify log(4*n) using log(a*b) -> log(a) + log(b) (basic high-school mathematics) and then apply the definition of big O.
But that isn't the right answer either. Once again, you should go back to first principles, count the number of times the method gets called for a given value of the parameter n, and do a proof by induction.
Both answers are incorrect.
In the first example, on each iteration you either find the number or you shrink the interval to 2/3 of its length: if the length used to be n, you make it (2/3)*n. In the worst case you find x on the last iteration, when the length of the interval is 1. So, just as with binary search, the complexity is calculated via a log: the complexity is O(log_{3/2}(n)), which is in fact simply O(log(n)).
In the second example, for a given number n you perform twice the number of operations needed for n/2. Start from n = 0 and n = 1 and use induction to prove that the complexity is in fact O(n).
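To make the induction concrete, here is a sketch that counts the calls (the counter and the loop over doubling inputs are my additions):
public class WhatIsCount {
    static long calls = 0;

    public static void whatIs(int n) {
        calls++;
        if (n > 0) {
            whatIs(n / 2);
            whatIs(n / 2);
        }
    }

    public static void main(String[] args) {
        for (int n = 1; n <= 1024; n *= 2) {
            calls = 0;
            whatIs(n);
            // calls roughly doubles each time n doubles: linear growth, O(n)
            System.out.println("n = " + n + ", calls = " + calls);
        }
    }
}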
Hope this helps.
A) This algorithm seems similar to the golden-section search. When analyzing complexity, it's sometimes easier to imagine what would happen if we extended the data structure rather than contracting it. Think of it like this: every loop iteration removes a third of the search interval. That means that if we know exactly how long a certain length takes, we could handle 50% more elements if we're allowed to loop once more: exponential growth. Thus, the search algorithm must have complexity O(log n).
B) Every time we add a "layer" of function calls, we need to double their number (since the function always calls itself twice). In other words, for a given length and time consumption, doubling n also doubles the number of function calls in the last layer. The algorithm is O(n).
