I have an algorithm for creating a list of all possible subgraphs on P vertices through a given vertex. It's not perfect
but I think it should be working alright. The problem is I get lost when I try to calculate its time complexity.
I conjured up something like T(p) = 2^d + 2^d * (n * T(p-1) ), where d=Δ(G), p=#vertices required, n=|V|. It's really just a guess.
Can anyone help me with this?
The powerSet() algorithm used should be O(2^d) or O(d*2^d).
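For context, powerSet() is not shown here; a typical subset-enumeration helper (just a sketch of what it might look like, not necessarily the exact one used) does O(d * 2^d) work:

private Set<Set<Node>> powerSet(Set<Node> input) {
    // start from the empty set and double the collection for each element
    Set<Set<Node>> result = new HashSet<Set<Node>>();
    result.add(new HashSet<Node>());
    for (Node node : input) {
        Set<Set<Node>> next = new HashSet<Set<Node>>();
        for (Set<Node> subset : result) {
            next.add(subset);                          // keep the subset without node
            Set<Node> withNode = new HashSet<Node>(subset);
            withNode.add(node);
            next.add(withNode);                        // and the subset with node
        }
        result = next;
    }
    return result;                                     // 2^d subsets of up to d nodes each
}

The main method is: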
private void connectedGraphsOnNVertices(int n, Set<Node> connectedSoFar, Set<Node> neighbours, List<Set<Node>> graphList) {
    if (n == 1) return;

    for (Set<Node> combination : powerSet(neighbours)) {
        if (connectedSoFar.size() + combination.size() > n || combination.size() == 0) {
            continue;
        } else if (connectedSoFar.size() + combination.size() == n) {
            Set<Node> newGraph = new HashSet<Node>();
            newGraph.addAll(connectedSoFar);
            newGraph.addAll(combination);
            graphList.add(newGraph);
            continue;
        }
        connectedSoFar.addAll(combination);
        for (Node node : combination) {
            Set<Node> k = new HashSet<Node>(node.getNeighbours());
            connectedGraphsOnNVertices(n, connectedSoFar, k, graphList);
        }
        connectedSoFar.removeAll(combination);
    }
}
It looks like the algorithm has a bug: after the recursive call, nodes that appear in combination may also appear in connectedSoFar. The check that connectedSoFar.size() + combination.size() equals n therefore seems incorrect, since it might count a node twice.
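For example, one way to avoid that double counting would be to test the size of the union rather than the sum of the sizes (just a sketch, not necessarily the intended semantics):

// Sketch only: count each node once by building the union explicitly.
Set<Node> union = new HashSet<Node>(connectedSoFar);
union.addAll(combination);
if (union.size() > n || combination.isEmpty()) {
    continue;
} else if (union.size() == n) {
    graphList.add(union);      // union already holds exactly n distinct nodes
    continue;
}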
Anyway, to analyze the algorithm otherwise: you have 2^d elements in the power set; every operation in the "else" branch takes O(n) time because connectedSoFar and combination together can't contain more than n nodes. Adding the elements to connectedSoFar then takes O(n log n) time because |combination| ≤ n. The iteration over the nodes of combination happens O(n) times; within it there is an O(d) operation to construct the hash set k, and then the recursive call.
Denote then the complexity of the procedure by X(n) where n is the parameter. You have
X(n) ~ 2^d (n + n log n + n (d + X(n - 1)))
because in the recursive call you have added at least one vertex to the graph, so the parameter n in the recursive call effectively decreases by at least one.
Simplify this to
X(n) ~ 2^d (n (1 + d + log n + X(n - 1)))
because d is constant. Write D = 2^d, eliminate the constant 1, and you get
X(n) ~ D n (d + log n + X(n - 1))
which you can analyze as
X(n) ~ (2^d)^n n! (d + log n)
showing that your algorithm is really a time hog :)
double expRecursive(double x, int n) {
    if (n <= 4) {
        return expIterativ(x, n);
    }
    return expRecursive(x, n/2) *
           expRecursive(x, (n + 1)/2);
}
So the problem I am dealing with is how to write the time complexity of this method using big-O notation.
Here is what I've done so far. I'm not sure if it is correct, but let me explain it. I get that T(n) = 2T(n/2) + 1 for n > 4, since we have 2 recursive calls plus other operations. But when it comes to n <= 4, that is where I got stuck. There is a call there too, which makes me think it would be something like T(n) = T(n/2) + 1, but that doesn't feel right either. I would really appreciate it if someone could help me.
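For reference, expIterativ is not shown; a plausible version (an assumption, it might differ from the real one) would be a simple multiplication loop:

// Assumed implementation of expIterativ -- a plain multiplication loop.
// In expRecursive it is only ever called with n <= 4, so it costs O(1) there.
double expIterativ(double x, int n) {
    double result = 1.0;
    for (int i = 0; i < n; i++) {
        result *= x;
    }
    return result;
}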
Assuming a constant x for our purposes (i.e., we are not interested in the growth rate as a function of x), expIterative is also just a function of n, and is only called for cases where n <= 4. There is some largest time t* that expIterative takes to run on x and n where n goes from 0 to 4. We can simply use that largest time t* as a constant, since the range of n that can be sent as an input is bounded.
double expRecursive(double x, int n) {
    if (n <= 4) {                        // a+b
        return expIterativ(x, n);        // c+t*
    }
    return expRecursive(x, n/2) *        // c+T(n/2)
           expRecursive(x, (n + 1)/2);   // d+T((n+1)/2)
}
As you pointed out, we can make the simplifying assumption that n is even and just worry about that case. If we assume n is a power of 2, even easier, since then all recursive calls will be for even numbers.
We get
T(n) <= 2T(n/2) + (a+b+2c+d+t*)
The stuff in parentheses at the end is just a sum of constants, so we can add them together and call the result k:
T(n) <= 2T(n/2) + k
We can write out some terms here:
n T(n)
4 t*
8 2t* + k
16 4t* + 2k + k
32 8t* + 4k + 2k + k
...
2^n 2^(n-2)t* + 2^(n-2)k - k
= (2^n)(t* + k)/4 - k
So for an input 2^n, it takes time proportional to 2^n. That means that T(n) = O(n).
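As a quick sanity check (not part of the original answer), you can count the calls; doubling n roughly doubles the count, which matches the linear bound:

// Illustrative only: instrumented copy of expRecursive with a call counter.
// Math.pow stands in for the unshown expIterativ.
static int calls = 0;

static double expRecursiveCounted(double x, int n) {
    calls++;
    if (n <= 4) {
        return Math.pow(x, n);
    }
    return expRecursiveCounted(x, n / 2) * expRecursiveCounted(x, (n + 1) / 2);
}

// After expRecursiveCounted(2.0, 1024), calls is roughly half of what it is
// after expRecursiveCounted(2.0, 2048) -- consistent with T(n) = O(n).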
Hello, I need to apply an algorithm similar to this, but the problem is that I need the complexity to be O(log n). The complexity of the code below is said to be O(log n), but from what I understand a recursive method has a growth order of O(n). So the question is: what is the growth order of the code below?
public static int findPeak(int[] array, int start, int end) {
    int index = start + (end - start) / 2;

    if (index - 1 >= 0 && array[index] < array[index - 1]) {
        return findPeak(array, start, index - 1);
    } else if (index + 1 <= array.length - 1 && array[index] < array[index + 1]) {
        return findPeak(array, index + 1, end);
    } else {
        return array[index];
    }
}
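(For what it's worth, the method is presumably invoked over the whole array, as in the hypothetical call below.)

// Hypothetical call site -- the original driver code is not shown.
int[] array = {1, 3, 5, 4, 2};
int peak = findPeak(array, 0, array.length - 1);   // returns 5 for this input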
It should be O(log n). For simplicity (and as an easy way to think about it), think of this as building a binary tree. In each function call it divides the input range into two halves (creating the nodes of a binary tree). So if the number of inputs is n, the binary tree has log(n) levels (one level -> one function call).
Also note that in any one function call only one recursive call happens (either in the if block or in the else-if block, but not both). This might be what made it feel like O(n) growth.
In each branch of the code, the recursive call is made on half of the input range passed into the function. Hence, if T(n) is the time complexity of the function, we can write:
T(n) = T(n/2) + 1
The 1 accounts for the comparisons that choose a branch, and T(n/2) is the cost of whichever branch is taken. Hence, T(n) is in O(log(n)).
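Unrolling that recurrence makes the bound explicit (assuming n halves at each step):

T(n) = T(n/2) + 1
     = T(n/4) + 2
     = T(n/8) + 3
     ...
     = T(1) + log2(n)

so T(n) grows like log(n).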
The outer while loop runs O(log n) times. Will the inner loop do O(n) work, since it will concatenate at most n characters (resulting in O(log n + n) in total)? Would using a StringBuilder make it O(1)?
List<String> l = new ArrayList<>();
// some code to add N items to l
//.
//.
//.
while (l.size() > 1) {
    int lo = 0, hi = l.size() - 1;
    List<String> temp = new ArrayList<>();
    while (lo < hi) {
        temp.add("(" + l.get(lo) + "," + l.get(hi) + ")");
        lo++;
        hi--;
    }
    l = temp;
}
I have come up with multiple solutions; the best one that outputs a string is a modification of your original post.
That solution would match the algorithmic time of using a StringBuilder sized to l.size() * 2 - 1 (the * 2 - 1 allows for the commas).
The following solution reduces the cost of each add in the inner loop to O(1) by pre-allocating the array list with the maximum number of elements it will hold, each of which can then be assigned directly. Your original code may have to grow the backing array, copying all of the existing element references, which costs O(N) whenever an add exceeds the ArrayList's current capacity. The following code initializes the array list to the proper size.
The entire complexity is 1/2 * N + (1/2 * (N - 1) + 1/4 * (N - 2) + 1/8 * (N - 3) + ...)
My math/analysis may not be correct; however, the lower bound follows because concatenating all of the elements of an array requires at minimum N operations => O(N).
The upper bound, assuming my math is correct above, is also O(N): the later terms each contribute at most half of the previous one (N/2 + N/4 + N/8 + ... < N), so they cannot push the total above a constant multiple of N.
If my math is wrong, the worst case is N log N.
The absolute upper bound is N squared (if you ignore the fact that N decreases every iteration).
List<String> l = new ArrayList<>();
// some code to add N items to l
//.
//.
//.
while (l.size() > 1) {
    int lo = 0, hi = l.size() - 1;
    List<String> temp = new ArrayList<>(l.size() / 2 + 1);   // pre-size the list
    while (lo < hi) {
        temp.add("(" + l.get(lo) + "," + l.get(hi) + ")");
        lo++;
        hi--;
    }
    l = temp;
}
In practice, the allocation and de-allocation of memory for the 1/2 * N ArrayLists becomes the limiting factor of the algorithm for all but small to medium values of N.
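For reference, one way a StringBuilder could be applied to the inner concatenation (a sketch; the capacity estimate is my own guess, not necessarily what was originally meant):

// Sketch only: pre-size both the list and each pair's StringBuilder.
while (l.size() > 1) {
    int lo = 0, hi = l.size() - 1;
    List<String> temp = new ArrayList<>(l.size() / 2 + 1);
    while (lo < hi) {
        StringBuilder sb = new StringBuilder(l.get(lo).length() + l.get(hi).length() + 3);
        sb.append('(').append(l.get(lo)).append(',').append(l.get(hi)).append(')');
        temp.add(sb.toString());
        lo++;
        hi--;
    }
    l = temp;
}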
I have the following:
public static int[] MyAlgorithm(int[] A, int n) {
    boolean done = true;
    int j = 0;

    while (j <= n - 2) {
        if (A[j] > A[j + 1]) {
            int temp = A[j + 1];
            A[j + 1] = A[j];
            A[j] = temp;
            done = false;
        }
        j++;
    }

    j = n - 1;
    while (j >= 1) {
        if (A[j] < A[j - 1]) {
            int temp = A[j - 1];
            A[j - 1] = A[j];
            A[j] = temp;
            done = false;
        }
        j--;
    }

    if (!done)
        return MyAlgorithm(A, n);
    else
        return A;
}
This essentially sorts an array 'A' of length 'n'. However, when trying to figure out the time complexity of this algorithm, I keep going in circles. If I look at the first while-loop, its body gets executed 'n-2' times, making it O(n). The second while-loop executes 'n-1' times, also making it O(n), once we drop the constants for both. Now, the recursive portion of this algorithm is what throws me off again.
The recursion looks to be tail-recursive, given that it doesn't do anything else after the call. At this point, I'm not sure if the recursion being tail-recursive has anything to do with the time complexity... If this really is O(n), does that necessarily mean it's Omega(n) as well?
Please correct any assumptions I've made if they are wrong. Any hints would be great!
This is O(n^2).
This is because with each recursion, you iterate the entire array twice. Once up (bubbling the highest answer to the top) and once down (bubbling the lowest answer to the bottom).
On the next iteration, you have yet another 2n. However, we know the topmost and bottommost elements are correct. Because of this we know we have n-2 unsorted elements. When it repeats, you will sort 2 more elements, and so on. if we want to find the number of iterations, "i", then we solve for n - 2i = 0. i = n/2 iterations.
n/2 iterations * 2n operations per iteration = n^2 operations.
EDIT: tail recursion doesn't change the time complexity, but it does help with memory in languages that perform tail-call optimization: because nothing happens after the recursive call, the current stack frame can be reused, so the stack doesn't grow with the number of passes.
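To make that concrete, here is roughly what the tail call turns into when expressed as a loop (a sketch of the transformation, not the original code):

// Equivalent iterative form: the tail call becomes another trip around the loop,
// so the stack no longer grows with the number of passes.
public static int[] MyAlgorithmIterative(int[] A, int n) {
    boolean done;
    do {
        done = true;
        for (int j = 0; j <= n - 2; j++) {
            if (A[j] > A[j + 1]) {
                int temp = A[j + 1];
                A[j + 1] = A[j];
                A[j] = temp;
                done = false;
            }
        }
        for (int j = n - 1; j >= 1; j--) {
            if (A[j] < A[j - 1]) {
                int temp = A[j - 1];
                A[j - 1] = A[j];
                A[j] = temp;
                done = false;
            }
        }
    } while (!done);
    return A;
}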
Also, I'm a bit rusty on this, but O notation gives an upper bound (commonly quoted for the worst case), whereas Omega notation gives a lower bound (the best case). This is Omega(n), because the best case is that the array is already sorted: the algorithm iterates it twice, finds nothing to swap, and doesn't recurse.
In your case the recurrence relation is something like:
T(n) = T(n) + n.
But if we note that the biggest number will end up at the end after every pass, we can approximate:
T(n) = T(n-1) + n
T(n-1) = T(n-2) + n-1
T(n-2) = T(n-3) + n-2
T(n) = T(n-2) + n-1 + n
T(n) = T(n-3) + n-2 + n-1 + n
T(n) = T(n-k) + kn - k(k-1)/2
If n-k = 1
then k = n-1
Substituting that:
T(n) = T(1) + (n-1)n - (n-1)(n-2)/2
which is of order O(n^2).
Also, the smallest number will end up at the beginning, so we could equally have approximated
T(n) = T(n-2) + n
and the order would still be O(n^2).
If that approximation is removed, we can't estimate exactly when done will become true. But in this case we can be sure that the biggest number always ends up at the end after each iteration, and the smallest at the beginning, so positions 0 and n-1 need no further work.
I hope this helps you understand why n^2.
Okay, I know Mergesort has a worst-case time of Θ(N log N), but its overhead is high and manifests near the bottom of the recursion tree where the merges are made. Someone proposed that we stop the recursion once the size reaches K and switch to insertion sort at that point. I need to prove that the running time of this modified algorithm is Θ(NK + N log(N/K)). I am blanking on how to approach this problem.
Maybe a good start is to look at the recurrence relation for this problem. I imagine for typical mergesort it would look something like this:
T(N) = 2 * T(N / 2) + N
i.e. you are dividing the problem into 2 subproblems of half the size, and then performing N work (the merge). We have a base case that takes constant time.
Modelling this as a tree we have:
T(N) = N -> T(N / 2)
         -> T(N / 2)
     = N -> (N / 2) -> T(N / 4)
                    -> T(N / 4)
         -> (N / 2) -> T(N / 4)
                    -> T(N / 4)
This gives an expansion of
T(N) = N + 2N/2 + 4N/4 + ...
= N + N + N ...
So really we just need to see how deep it goes. We know that the ith level operates on subproblems N / 2^i in size. So our leaf nodes (T(1)) occur on level L where N / 2^L = 1:
N / 2^L = 1
N = 2^L
log N = log(2^L)
log N = L
So our runtime is N log N.
Now we introduce insertion sort. Our tree will look something like this
T(N) = ... -> I(K)
           -> I(K)
           ... x N/K
In other words, we will at some level L have to solve N/K insertion sort problems of size K. Insertion sort has a worst-case runtime of K^2. So at the leaves we have this much work in total:
(N / K) * I(K)
= (N / K) * K * K
= N * K
But we also have a bunch of merging to do, at a cost of N per level of the tree, as explained before. Going back to our previous method, let's find L (the number of levels before we reach subproblems of size K and thus switch to insertion sort):
N / 2^L = K
N / K = 2^L
L = log (N/K)
So in total we have
T(N) = N * K + N * log(N/K)
which is exactly the Θ(NK + N log(N/K)) bound you were asked to show.
It's been too long since I took algorithms to give you a proof sketch, but that should get your neurons firing.
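If it helps to see the algorithm concretely, here is a rough sketch of the hybrid sort being analyzed (the cutoff K and the method names are illustrative, not taken from your assignment):

// Sketch of mergesort with an insertion-sort cutoff at size K (illustrative).
static final int K = 16;

static void hybridMergeSort(int[] a, int lo, int hi) {
    if (hi - lo + 1 <= K) {            // subproblem of size <= K:
        insertionSort(a, lo, hi);      // O(K^2) worst case per leaf
        return;
    }
    int mid = lo + (hi - lo) / 2;
    hybridMergeSort(a, lo, mid);       // two subproblems of half the size...
    hybridMergeSort(a, mid + 1, hi);
    merge(a, lo, mid, hi);             // ...plus O(N) merge work per level
}

static void insertionSort(int[] a, int lo, int hi) {
    for (int i = lo + 1; i <= hi; i++) {
        int key = a[i], j = i - 1;
        while (j >= lo && a[j] > key) {
            a[j + 1] = a[j];
            j--;
        }
        a[j + 1] = key;
    }
}

static void merge(int[] a, int lo, int mid, int hi) {
    int[] tmp = new int[hi - lo + 1];
    int i = lo, j = mid + 1, t = 0;
    while (i <= mid && j <= hi) tmp[t++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i <= mid) tmp[t++] = a[i++];
    while (j <= hi) tmp[t++] = a[j++];
    System.arraycopy(tmp, 0, a, lo, tmp.length);
}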