The outer while loop runs O(log n) times. Will the inner loop do O(n) work, since it concatenates at most n characters (resulting in O(log n + n) in total)? Would using a StringBuilder make it O(1)?
List<String> l = new ArrayList<>();
// some code to add N items to l
//.
//.
//.
while (l.size() > 1) {
    int lo = 0, hi = l.size() - 1;
    List<String> temp = new ArrayList<>();
    while (lo < hi) {
        temp.add("(" + l.get(lo) + "," + l.get(hi) + ")");
        lo++;
        hi--;
    }
    l = temp;
}
I have come up with multiple solutions; the best one that outputs a string is a modification of your original post.
This solution would match the algorithmic time of using StringBuilder(l.size() * 2 - 1). The * 2 - 1 is to allow for the commas.
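For reference, here is a minimal sketch of how I read that StringBuilder variant (my interpretation, not code from the post): one builder pre-sized to l.size() * 2 - 1, i.e. one slot per element plus one per separating comma, assuming single-character elements.
// Hedged sketch: build one comma-separated string in a single pass with a
// pre-sized StringBuilder (capacity = elements + commas, one char per element).
StringBuilder sb = new StringBuilder(l.size() * 2 - 1);
for (int i = 0; i < l.size(); i++) {
    if (i > 0) {
        sb.append(',');      // separating comma
    }
    sb.append(l.get(i));
}
String joined = sb.toString();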
The following solution reduces each add inside the inner loop to O(1) by pre-allocating the ArrayList with the maximum number of elements it will hold, each of which can then be directly assigned. Your original code has to copy all of the existing elements of the ArrayList whenever it grows beyond its current capacity, which makes adding an element beyond the initial capacity cost O(N). The following code initializes the ArrayList to the proper size.
The entire complexity is 1/2 * N + (1/2 * (N - 1) + 1/4 * (N - 2) + 1/8 * (N - 3) + ...).
My math/analysis may not be correct; however, the lower bound holds because concatenating all elements of an array into a string requires at minimum N operations => O(N).
The upper bound, assuming my math above is correct, is also O(N), since a series whose terms keep halving can never sum to more than a constant factor of its first term.
If my math is wrong, the worst case is O(N log N).
The absolute upper bound is O(N^2) (if you ignore the fact that N decreases every iteration).
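To spell out the halving argument (my own summary, counting list operations only and ignoring the lengths of the strings being concatenated): the passes handle roughly N/2 + N/4 + N/8 + ... pairs, and that geometric series is bounded by N, which is where the O(N) figure comes from.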
List<String> l = new ArrayList<>();
// some code to add N items to l
//.
//.
//.
while (l.size() > 1) {
    int lo = 0, hi = l.size() - 1;
    List<String> temp = new ArrayList<>(l.size() / 2 + 1);
    while (lo < hi) {
        temp.add("(" + l.get(lo) + "," + l.get(hi) + ")");
        lo++;
        hi--;
    }
    l = temp;
}
In practice, for all but small to medium values of N, the allocation and de-allocation of memory for the 1/2 * N ArrayLists becomes the limiting factor of the algorithm.
Hello, I need to apply an algorithm similar to this, but the problem is that I need the complexity to be O(log n). The complexity of the code below is said to be O(log n), but from what I understand a recursive method has a growth order of O(n). So the question is: what is the growth order of the code below?
public static int findPeak(int[] array, int start, int end) {
    int index = start + (end - start) / 2;
    if (index - 1 >= 0 && array[index] < array[index - 1]) {
        return findPeak(array, start, index - 1);
    } else if (index + 1 <= array.length - 1 && array[index] < array[index + 1]) {
        return findPeak(array, index + 1, end);
    } else {
        return array[index];
    }
}
It should be O(log n). For simplicity (and as an easy way to think about it), think of this as building a binary tree: each function call divides the input array into two halves (creating nodes in a binary tree). So if the number of inputs is n, the binary tree has log(n) levels (one level -> one function call).
Also note that in one function call only one recursive call happens (either in the if block or in the else-if block, but not both). This might be what made it feel like O(n) growth.
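For instance (my example, not from the post), findPeak(new int[]{1, 3, 2, 5, 4}, 0, 4) picks index 2, recurses into findPeak(array, 0, 1), then into findPeak(array, 1, 1), and returns 3: three calls in total for n = 5, i.e. roughly log2(n) levels.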
The size of the input array in each branch is half the size of the array passed into the function. Hence, if T(n) is the time complexity of the function, we can write:
T(n) = T(n/2) + 1
The 1 accounts for the comparisons that pick a branch, and T(n/2) is the cost of whichever branch is selected. Hence, T(n) is in O(log(n)).
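Unrolling that recurrence (my own expansion) makes it explicit: T(n) = T(n/2) + 1 = T(n/4) + 2 = ... = T(1) + log2(n), which is O(log n).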
I have the following algorithm:
for (int i = 0; i < list.size(); i++) {
    String cur = list.get(i);
    int cnt = 0;
    while (output.contains(cur)) {
        cur = cur.substring(0, 5) + String.valueOf(cnt);
        cnt++;
    }
    output.add(cur);
}
Not that "list" and "output" are ArrayLists.
I'm thinking that the Time complexity is O(n^2). But what about the while loop that has a output.contains(cur) inside ?
The complexity of this algorithm seems to depend on the initial contents of the output list:
If output is empty, the while loop is never executed, and the complexity is O(N), where N is the size of list.
If list and output are crafted to keep triggering the output.contains(cur) condition, the cost grows.
For example,
List<String> list = Arrays.asList("abcde0", "abcde1", "abcde2", "abcde3", "abcde4", "abcde5", "abcde6");
List<String> output = new ArrayList<>(list);
The size of output keeps growing, so the number of iterations will look like this:
1) n+1
2) n+1 + n+2 = 2 (n+1) + 1
3) n+1 + n+2 + n+3 = 3 (n+1) + 3
4) n+1 + n+2 + n+3 + n+4 = 4 (n+1) + 6
...
n) n (n+1) + (1 + n-1)*(n-1)/2 = n (n+1) + n (n - 1)/2 = n (3n + 1)/2
Thus, in this case (possibly not even the worst one) the complexity can be O(N^2).
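Illustrative sketch (my code, not part of the answer): the small program below runs the loop from the question on the crafted input above and counts how many times output.contains() is evaluated; each evaluation is itself an O(|output|) scan, which is where the quadratic behaviour comes from.
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ContainsCost {
    static long containsCalls = 0;

    // wrapper so we can count every linear scan of output
    static boolean containsCounted(List<String> output, String s) {
        containsCalls++;
        return output.contains(s);
    }

    public static void main(String[] args) {
        List<String> list = Arrays.asList("abcde0", "abcde1", "abcde2",
                "abcde3", "abcde4", "abcde5", "abcde6");
        List<String> output = new ArrayList<>(list);

        for (int i = 0; i < list.size(); i++) {
            String cur = list.get(i);
            int cnt = 0;
            while (containsCounted(output, cur)) {
                cur = cur.substring(0, 5) + String.valueOf(cnt);
                cnt++;
            }
            output.add(cur);
        }
        System.out.println("contains() evaluations: " + containsCalls);
    }
}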
I have an Algorithm question.
For example, there is an int[] array like [1,5,4,3,7,2].
I want to find the kth largest difference in this array, like:
array[i] - array[j] = kth largest difference
(index i must be smaller than j, and array[i] must be larger than array[j]).
The output should be the index j.
My current idea:
I can build an int[][] to store all the differences in the array,
then sort them and find the kth largest difference.
But the time complexity is O(n^2).
Are there better solutions?
You could run 2 separate methods that find the max and the min of the array and then take the difference.
OR
You can use your method of creating a new array that holds the difference of every single pair, THEN find the max of that array and output it.
Then use the Arrays.sort() method to reorder that array and print out the largest differences by index + 1.
Example in Python
results = []
a = [1, 5, 4, 3, 7, 2]
a_new = [(1, 0), (5, 1), (4, 2), (3, 3), (7, 4), (2, 5)]  # (5, 1) -> 5: value and 1: index
a_new.sort(key=lambda t: t[0])  # sort a_new by value, e.g. mergesort, O(n log n)
start_index = 0
end_index = len(a_new) - 1
i = -1
j = -1
diff = 0
while True:  # sequential search
    if a_new[start_index][1] < a_new[end_index][1]:  # compare i and j
        tmp_diff = a_new[end_index][0] - a_new[start_index][0]
        i = start_index
        j = end_index
        diff = tmp_diff
        results.append((i, j, diff))  # add a tuple to the results list
        end_index -= 1
    else:  # a_new[start_index][1] > a_new[end_index][1]
        start_index += 1
    if start_index == end_index: break
results.sort(key=lambda t: t[2], reverse=True)  # sort results by diff, then the kth largest is results[k - 1]
I hope this helps. I can't check for typing errors.
My idea is: the max difference is max_possible_element_value - min_element_value.
Sample:
results = []
a_new = [(1,0), (2,5), (3,3), (4,2), (5,1), (7,4)]
start_index = 0
end_index = len(a_new) - 1
i = -1
j = -1
diff = 0
while True:
    if a_new[start_index][1] < a_new[end_index][1]:
        i = a_new[start_index][1]
        j = a_new[end_index][1]
        diff = a_new[end_index][0] - a_new[start_index][0]
        results.append((i, j, diff))
        end_index -= 1
    else:
        start_index += 1
    if start_index == end_index: break
print(results)
Result:
[(0, 4, 6), (0, 1, 4), (0, 2, 3), (0, 3, 2), (0, 5, 1)]
You can sort the result array and then get the kth diff.
Pseudocode-wise, it could go this way:
You can sort the current array in descending order, then start your calculation like so:
diffList = {}

calculate(array, k):
    if (k <= 0) OR (array.length < 2) OR (k > 2^(array.length - 1))
        return nothing // whatever behavior you want (exception or null Integer, whatever suits you)
    else
        return kthLargest(k, 0, array.length - 1, array)
    end if

kthLargest(count, upperBound, lowerBound, array):
    if count = 0
        if upperBound != lowerBound
            return max(array[upperBound] - array[lowerBound], max(sortDesc(diffList)))
        else
            return max(sort(diffList))
        end if
    else if upperBound = lowerBound
        sortDesc(diffList)
        return diffList[count]
    else
        topDiff = array[upperBound] - array[upperBound + 1]
        botDiff = array[lowerBound - 1] - array[lowerBound]
        if (topDiff > botDiff)
            add botDiff to diffList
            return kthLargest(count - 1, upperBound, lowerBound - 1, array)
        else
            add topDiff to diffList
            return kthLargest(count - 1, upperBound + 1, lowerBound, array)
        end if
    end if
Call calculate(array,k) and you're set.
This basically keeps track of a 'discarded pile' of differences while iterating and reducing bounds to always have your final largest difference be the current bounds' difference or a potential better value in that discarded pile.
Both sorts (omitted for brevity) should make this O(n log n).
You can substitute arrays for the most convenient collections, and unwrap this into an iterative solution also.
Corrections appreciated!
It can be done in complexity O(N * logN + N * logValMax). First let's sort the array. After that we can build a function countSmallerDiffs(x) which counts how many differences smaller than or equal to x there are in the array; this function has complexity O(N) using two pointers. After that we can binary search the result in the range minVal..maxVal. We need to find the p that satisfies countSmallerDiffs(p) <= k < countSmallerDiffs(p + 1). The answer will be p.
Hope this helps you out! Good luck!
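A rough Java sketch of that idea (my own code; only the name countSmallerDiffs comes from the answer, everything else is an assumption). This version binary searches for the value of the k-th smallest difference on the sorted array; for the k-th largest you would query k' = n*(n-1)/2 - k + 1 instead.
import java.util.Arrays;

public class KthDifference {

    // number of pairs (i < j) with sorted[j] - sorted[i] <= x, two pointers, O(n)
    static long countSmallerDiffs(int[] sorted, long x) {
        long count = 0;
        int i = 0;
        for (int j = 0; j < sorted.length; j++) {
            while (sorted[j] - sorted[i] > x) i++;
            count += j - i;                  // pairs (i..j-1, j)
        }
        return count;
    }

    // value of the k-th smallest difference, k is 1-based
    static long kthSmallestDiff(int[] arr, long k) {
        int[] sorted = arr.clone();
        Arrays.sort(sorted);                 // O(n log n)
        long lo = 0, hi = (long) sorted[sorted.length - 1] - sorted[0];
        while (lo < hi) {                    // binary search over the value range
            long mid = lo + (hi - lo) / 2;
            if (countSmallerDiffs(sorted, mid) >= k) hi = mid;
            else lo = mid + 1;
        }
        return lo;
    }
}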
First of all - yes, I have read several posts here on SO, as well as other places about estimating the complexity of an algorithm.
I have read this, this and this, as well as others
I want to try with an algorithm I wrote that finds the largest rectangle, to see if I understood anything at all from what I've read.
public static long getLargestRectangle(long[] arr, int index, long max) {
    int count = 1;                            //1 step * 1
    int i = index - 1;                        //1 step * 1
    int j = index + 1;                        //1 step * 1
    if (index == arr.length - 1) return max;  //1 step * 2
    while (i > -1 && arr[index] <= arr[i]) {  //1 step * (N+1)
        count++;                              //1 step * N
        i--;                                  //1 step * N
    }
    while (j <= arr.length - 1 && arr[index] <= arr[j]) { //1 step * (N+1)
        count++;                              //1 step * N
        j++;                                  //1 step * N
    }
    max = (max < (arr[index] * count) ? (arr[index] * count) : max); //1 step * 7
    return getLargestRectangle(arr, index + 1, max); //1 step * 1
}
//total number of steps: 1 + 1 + 1 + (N + 1) + N + N + (N + 1) + N + N + 7
//=> 6N + 12 = O(N) ?
Am I way off here? I'd love some insight.
EDIT
Like this?
T(n) = O(N) + T(n+1)
T(n) = O(N) + O(N) + T(n+2)
T(n) = O(N) + O(N) + O(N) + T(n+3)
T(n) = i * O(N) + T(n+i)
T(n) = n * O(N) + T(n+n)
= O(N^2)
If this is wrong, I'd really appreciate if you could update your answer and show me.
Am I way off here? I'd love some insight.
I am afraid so :(
return getLargestRectangle(arr, index + 1, max); //1 step * 1
This above is NOT 1 step; it is a recursive invocation of the method. This method "shrinks" the array by 1 element, so this step actually costs T(n-1), where T(.) is the time complexity of the algorithm.
Combining this with what you already have, you get
T(n) = T(n-1) + O(N)
Solving this recurrence formula will give you the algorithm's complexity.
Note: T(n) = T(n-1) + O(N) is syntactic sugar; it should actually have been T(n) <= T(n-1) + CONST*N for some constant CONST, since you cannot add a set (O(N)) to a scalar (T(n-1)).
Also note: N != n. n changes over time, while N is the initial length of the array. This is because your algorithm is actually traversing from n (the index) down to 0 and from n up to N.
This does not change the time complexity in terms of big O notation, however.
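Spelling the recurrence out (my own expansion, using the CONST notation above): T(n) <= T(n-1) + CONST*N <= T(n-2) + 2*CONST*N <= ... <= T(0) + n*CONST*N, and since the recursion depth is at most N, this gives O(N^2).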
I have an algorithm for creating a list of all possible subgraphs on P vertices through a given vertex. It's not perfect,
but I think it should be working alright. The problem is that I get lost when I try to calculate its time complexity.
I conjured up something like T(p) = 2^d + 2^d * (n * T(p-1) ), where d=Δ(G), p=#vertices required, n=|V|. It's really just a guess.
Can anyone help me with this?
The powerSet() algorithm used should be O(2^d) or O(d*2^d).
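powerSet() itself isn't shown in the post; for reference, a straightforward O(d * 2^d) version might look like the following sketch (my code, not the poster's; it assumes d fits in an int bit mask, which is fine since d is a vertex degree here):
// Hedged sketch of a typical powerSet() helper: enumerate all 2^d subsets of a
// d-element set via bit masks (uses java.util collections; generic, so it also
// works for Set<Node>).
static <T> List<Set<T>> powerSet(Set<T> input) {
    List<T> elements = new ArrayList<>(input);
    List<Set<T>> subsets = new ArrayList<>();
    int d = elements.size();
    for (int mask = 0; mask < (1 << d); mask++) {    // one mask per subset
        Set<T> subset = new HashSet<>();
        for (int bit = 0; bit < d; bit++) {
            if ((mask & (1 << bit)) != 0) {
                subset.add(elements.get(bit));       // element 'bit' is in this subset
            }
        }
        subsets.add(subset);
    }
    return subsets;
}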
private void connectedGraphsOnNVertices(int n, Set<Node> connectedSoFar, Set<Node> neighbours, List<Set<Node>> graphList) {
    if (n == 1) return;
    for (Set<Node> combination : powerSet(neighbours)) {
        if (connectedSoFar.size() + combination.size() > n || combination.size() == 0) {
            continue;
        } else if (connectedSoFar.size() + combination.size() == n) {
            Set<Node> newGraph = new HashSet<Node>();
            newGraph.addAll(connectedSoFar);
            newGraph.addAll(combination);
            graphList.add(newGraph);
            continue;
        }
        connectedSoFar.addAll(combination);
        for (Node node : combination) {
            Set<Node> k = new HashSet<Node>(node.getNeighbours());
            connectedGraphsOnNVertices(n, connectedSoFar, k, graphList);
        }
        connectedSoFar.removeAll(combination);
    }
}
It looks like the algorithm has a bug because after the recursive call, it is possible that nodes that appear in combination also appear in connectedSoFar, so the check that connectedSoFar.size() + combination.size() equals n seems incorrect, as it might count a node twice.
Anyway, otherwise, to analyze the algorithm: you have 2^d elements in the power set; every operation in the "else" branch takes O(n) time because connectedSoFar and combination together can't contain more than n nodes. Adding the elements to connectedSoFar then takes O(n log n) time because |combination| ≤ n. The iteration over the combination's nodes happens O(n) times; within it there is an O(d) operation to construct the hash set k, and then the recursive call.
Denote then the complexity of the procedure by X(n) where n is the parameter. You have
X(n) ~ 2^d (n + n log n + n (d + X(n - 1)))
because in the recursive call you have added at least one vertex to the graph so in practice the parameter n in the recursive call decreases virtually by at least one.
Simplify this to
X(n) ~ 2^d (n (1 + d + log n + X(n - 1)))
because d is constant, mark D = 2^d, eliminate the constant 1, and you get
X(n) ~ D n (d + log n + X(n - 1))
which you can analyze as
X(n) ~ (2^d)^n n! (d + log n)
showing that your algorithm is really a time hog :)