Why is this O(N^3) and not O(N^4)? - java

public static int g(LinkedList<Integer> list) {
    int t = 0;
    for (int i = 0; i < list.size(); i++) {
        int tgt = list.get(i);
        for (int j = i + 1; j < list.size(); j++) {
            if (list.get(j) == tgt)
                t += list.get(j);
        }
    }
    return t;
}
Shouldn't the if statement make the complexity O(N^4)?

The if statement itself has nothing to do with the time complexity: evaluating an if is an O(1) operation, but the work done in its condition and body can have a greater time complexity.
It's because list.get(i) is O(n). To get the i-th element of a LinkedList you have to step through the list, element by element, until you reach it. That's because a LinkedList has no index-based lookup: it only holds references to the first and last nodes, and each node only knows its direct neighbours.
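To make that concrete, here is a rough sketch of what an index-based lookup in a linked structure has to do. This is not the actual java.util.LinkedList source (which also walks from whichever end is closer), just an illustration of the idea:

// Rough sketch, not the real JDK implementation: get(index) has to hop
// node-by-node from the head, so it costs O(index), i.e. O(n) in general.
static <T> T getByIndex(Node<T> head, int index) {
    Node<T> current = head;
    for (int steps = 0; steps < index; steps++) {
        current = current.next;   // one hop per index position
    }
    return current.value;
}

static class Node<T> {
    T value;
    Node<T> next;
}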
This is the time complexity for each of your functions:
public static int g(LinkedList<Integer> list) {
    int t = 0;
    for (int i = 0; i < list.size(); i++) {          // O(n) - loop
        int tgt = list.get(i);                        // O(n)
        for (int j = i + 1; j < list.size(); j++) {   // O(n) - loop
            if (list.get(j) == tgt)                   // O(n)
                t += list.get(j);                     // O(n)
        }
    }
    return t;
}
Since you're only iterating with two loops, that by itself gives O(n^2). The three calls to list.get(...) each have a time complexity of O(n), giving 3*O(n), which simplifies to O(n). That makes the final time complexity O(n) * O(n) * O(n) => O(n^3).
Extra
On an unrelated note: when you call list.get(j) twice in the innermost loop, you make the program walk through the list twice, even though you just retrieved that value.
if (list.get(j) == tgt)
    t += list.get(j);
Chances are that the compiler will optimize this for you, but it's still a good idea to call list.get(j) once and store its value in a local variable.
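As a minimal sketch (not the original poster's code), the same method with the value stored once per inner iteration would look like this; the overall bound is still O(n^3) because of the remaining get() calls, it just avoids the duplicate traversal:

public static int g(LinkedList<Integer> list) {
    int t = 0;
    for (int i = 0; i < list.size(); i++) {
        int tgt = list.get(i);                      // O(n) traversal
        for (int j = i + 1; j < list.size(); j++) {
            int value = list.get(j);                // O(n) traversal, done once
            if (value == tgt)
                t += value;                         // O(1): reuses the stored value
        }
    }
    return t;
}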

If statements are not loops.
Each get may take O(n), but the statement in the body runs after the condition, not multiplied by it. So the if statement takes O(n) + O(n) for the two gets, which is O(n).
That is nested inside two nested loops over a list of size n, so overall it is O(n^3).

Related

Big-O Notation of a for-loop with a conditional inner loop

I am trying to understand the time complexity of this program and why it is what it is. I've made some notes on what I think it is, but I'm unsure whether I understood it correctly.
public static int countSteps(int n) {
    int pow = 2;                          // O(1)
    int steps = 0;                        // O(1)
    for (int i = 0; i < n; i++) {         // O(n)
        if (i == pow) {                   // O(1)
            pow *= 2;                     // O(1)
            for (int j = 0; j < n; j++) { // O(n)
                steps++;                  // O(1)
            }
        }
        else {
            steps++;                      // O(1)
        }
    }
    return steps;                         // O(1)
}
The inner loop spends a lot of time iterating through n every time the if-statement is triggered; does that affect the time complexity, or is it still constant?
The outer loop means it's at least O(n). The inner loop runs only when i is a power of two, which happens about log2(n) times over the whole run, and each of those runs costs O(n), so the total is O(n*log n).
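If you want to convince yourself empirically, here is a quick sketch (not part of the original question or answer) that runs the method for a few values of n and compares the step count, i.e. how often the innermost statements execute, with n*log2(n):

// Sketch: the step count grows roughly like n * log2(n).
public class StepCountDemo {
    public static void main(String[] args) {
        for (int n : new int[]{1_000, 10_000, 100_000}) {
            System.out.printf("n=%d steps=%d n*log2(n)=%.0f%n",
                    n, countSteps(n), n * (Math.log(n) / Math.log(2)));
        }
    }

    static long countSteps(int n) {
        int pow = 2;
        long steps = 0;
        for (int i = 0; i < n; i++) {
            if (i == pow) {
                pow *= 2;
                for (int j = 0; j < n; j++) {
                    steps++;              // n increments, once per power of two
                }
            } else {
                steps++;                  // one increment for every other i
            }
        }
        return steps;
    }
}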

Not understanding big O notation O(∑_{i=0}^{n−1} ∑_{j=i+1}^{n} (j−i)) = O(∑_{i=0}^{n−1} (1+n−i)(n−i)/2) = O(n^3)

Working on the following problem:
Given a string s, find the length of the longest substring without repeating characters.
I'm using this brute force solution:
public class Solution {
    public int lengthOfLongestSubstring(String s) {
        int n = s.length();
        int res = 0;
        for (int i = 0; i < n; i++) {
            for (int j = i; j < n; j++) {
                if (checkRepetition(s, i, j)) {
                    res = Math.max(res, j - i + 1);
                }
            }
        }
        return res;
    }

    private boolean checkRepetition(String s, int start, int end) {
        int[] chars = new int[128];
        for (int i = start; i <= end; i++) {
            char c = s.charAt(i);
            chars[c]++;
            if (chars[c] > 1) {
                return false;
            }
        }
        return true;
    }
}
The big O notation given is as follows:
O(∑_{i=0}^{n−1} ∑_{j=i+1}^{n} (j−i)) = O(∑_{i=0}^{n−1} (1+n−i)(n−i)/2) = O(n^3)
I understand that three nested iterations would result in a time complexity of O(n^3).
But I only see two sigma operators at the start of the formula; could someone enlighten me on where the third iteration comes into play at the beginning of the formula?
The first sum from i=0 to n-1 corresponds to the outer for loop of lengthOfLongestSubstring, which you can see iterates from i=0 to n-1.
The second sum, from j = i+1 to n, corresponds to the second for loop (the formula starts j at i+1 rather than i because there's no need to check length-0 substrings).
Generally, we would expect this particular double for loop structure to produce O(n^2) algorithms and a third for loop (from k=j+1 to n) to lead to O(n^3) ones. However, this general rule (k for loops iterating through all k-tuples of indices producing O(n^k) algorithms) is only the case when the work done inside the innermost for loop is constant. This is because having k for loops structured in this way produces O(n^k) total iterations, but you need to multiply the total number of iterations by the work done in each iteration to get the overall complexity.
From this idea, we can see that the reason lengthOfLongestSubstring is O(n^3) is that the work done inside the body of the second for loop is not constant, but rather O(n). checkRepetition(s, i, j) iterates from i to j, taking j-i time (hence the (j−i) term inside the inner sum). O(j-i) time is O(n) time in the worst case, because i could be as low as 0 and j as high as n, and of course O(n-0) = O(n) (it's not too hard to show that checkRepetition is O(n) in the average case as well).
As mentioned by a commenter, having a linear operation inside the body of your second for loop has the same effect on the complexity as having a third for loop, which would probably be easier to see as being O(n^3); you could even imagine the definition of checkRepetition, including its for loop, being pasted inline into lengthOfLongestSubstring, as sketched below. But the basic idea is that doing O(n) work for each of the O(n^2) iterations of the two for loops means the total complexity is O(n)*O(n^2) = O(n^3).
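For illustration, here is a sketch of what that inlining would look like (an equivalent rewrite, not the original solution), which makes the third loop, and hence the O(n^3) bound, explicit:

public int lengthOfLongestSubstring(String s) {
    int n = s.length();
    int res = 0;
    for (int i = 0; i < n; i++) {                // O(n)
        for (int j = i; j < n; j++) {            // O(n)
            // Body of checkRepetition pasted inline:
            int[] chars = new int[128];
            boolean unique = true;
            for (int k = i; k <= j; k++) {       // O(n): the "hidden" third loop
                char c = s.charAt(k);
                chars[c]++;
                if (chars[c] > 1) {
                    unique = false;
                    break;
                }
            }
            if (unique) {
                res = Math.max(res, j - i + 1);
            }
        }
    }
    return res;
}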

Complexity of a for loop with a limited number of iterations

I have a quick question about complexity. I have this code in Java:
pairs is a HashMap whose key is a number (Integer) and whose value is that number's frequency in a Collection<Integer>. So:
pairs = new HashMap<Integer, Integer>(); // number -> numberFrequency
Then I want to find the matching pairs (a, b) that satisfy a + b == targetSum.
for (int i = 0; i < pairs.getCapacity(); i++) {                 // Complexity: O(n)
    if (pairs.containsKey(targetSum - i) && targetSum - i == i) {
        for (int j = 1; j < pairs.get(targetSum - i); j++) {
            collection.add(new MatchingPair(targetSum - i, i));
        }
    }
}
I know that the complexity of the first for loop is O(n), but the second for loop only runs a small number of times: the frequency of the number minus 1. Do we still count it as O(n), so that this whole portion of code is O(n^2)? If so, does anyone have an alternative that would make it O(n)?
It's O(n) if pairs.getCapacity() or pairs.get(targetSum - i) is a constant you know beforehand. Otherwise, two loops, one nested in the other, are generally O(n^2).
You can consider that for the worst case your complexity is O(n^2).
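Neither answer shows code for an O(n) alternative, but the usual approach is to iterate over the map's entries and look up each complement directly, rather than looping over an index range. A minimal sketch, assuming one MatchingPair per complementary key pair is enough (MatchingPair is stubbed here because its definition isn't shown in the question):

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class PairSumSketch {
    // Stub with the same shape as the asker's class; the real definition isn't shown.
    static class MatchingPair {
        final int a, b;
        MatchingPair(int a, int b) { this.a = a; this.b = b; }
    }

    // O(n) in the number of distinct keys: one pass over the entries,
    // each with an O(1) expected-time containsKey lookup.
    static List<MatchingPair> findPairs(Map<Integer, Integer> pairs, int targetSum) {
        List<MatchingPair> result = new ArrayList<>();
        for (Map.Entry<Integer, Integer> e : pairs.entrySet()) {
            int a = e.getKey();
            int b = targetSum - a;
            if (a > b) continue;                       // emit each unordered pair once
            if (a == b && e.getValue() < 2) continue;  // need the value twice for a + a
            if (pairs.containsKey(b)) {
                result.add(new MatchingPair(a, b));
            }
        }
        return result;
    }

    public static void main(String[] args) {
        Map<Integer, Integer> pairs = new HashMap<>();
        pairs.put(3, 2);   // the number 3 appears twice in the collection
        pairs.put(4, 1);
        pairs.put(2, 1);
        for (MatchingPair p : findPairs(pairs, 6)) {
            System.out.println(p.a + " + " + p.b + " = 6");
        }
    }
}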

Worst Case Big O with Java Algorithms

1.
for (int i = 0; i < 3; i++) {
    for (int j = 0; j < 10; j++) {
        System.out.println(i + j);
    }
}
I would assume the Big O would be 30, since the maximum number of iterations is 3*10.
2.
for (int i = 0; i < n; i++) {
    for (int j = 0; j < m; j++) {
        System.out.println(i + j);
    }
}
Would the Big O be n*m?
3.
for (int i = 0; i < n; i++) {
    for (int j = 0; j < m; j++) {
        for (int k = 1; k < 1000; k *= 2) {
            System.out.println(i + j + k);
        }
    }
}
n * m * log base 2 of (1000)? So the Big O is in n*log(n) time?
4.
for (int i = 0; i < n - 10; i++) {
    for (int j = 0; j < m / 2; j++) {
        System.out.println(i + j);
    }
}
5.
for (int i = 0; i < n; i++) {
    System.out.println(i);
}
// n and m are some integers
for (int j = 1; j < m; j *= 2) {
    System.out.println(j);
}
Can someone give me a hand with these if you know Big O? I am looking at them and I'm at a loss. I hope I am posting this in the right location; I find these problems difficult and appreciate any help.
I think it's important to point out first that Big O notation is about functions which, scaled by some arbitrary constant, are upper bounds for sufficiently large inputs.
O(1)
This is because each loop runs a constant number of times. We refer to it as O(1) rather than O(30) because the constant function 1, multiplied by any constant >= 30, is an upper bound.
O(n*m)
Simply because we have to loop through m iterations n times.
O(n*m)
This is the same as the previous one, only with another loop added inside. Notice that this loop, like the loops in the first problem, runs a constant number of times (k doubles from 1 to 1000, so about 10 iterations), so you don't even need to work out exactly how often it loops: it is O(1). That makes the whole thing O(n*m*1), which we can simply call O(n*m).
O(n*m)
For the outer loop, don't get caught up on the "- 10": that loop still runs in O(n). We can ignore the - 10 for the same reason we ignored the exact values in the first problem: constants don't really matter. The same principle applies to the m/2, because that's just m scaled by a constant factor of 1/2. So we can call this O(n*m).
T(n) = O(n) + O(lg m) => O(n + lg m)
There are two components to look at here: the first loop and the second loop. The first loop is clearly O(n), so that's no problem. The second loop is a little trickier: the iterator j grows exponentially (it doubles every iteration), so the loop runs only a logarithmic number of times. So this function runs in O(n + lg m).
Any constant factor can be ignored. O(30) is equal to O(1), which is what one would typically say for 1).
2) Just so.
3) in O(n*m*log_2(1000)), log_2(1000) is constant, so it's O(n*m).
4) O(n-10) is same as O(n). O(m/2) is same as O(m). Thus, O(n*m) again.
5) The first loop is trivially O(n); the second loop is O(log_2(m)), so the whole thing is O(n + log_2(m)).
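For the doubling loop in problem 5, a quick sketch (not from either answer) shows the iteration count tracking log_2(m):

// Sketch: j *= 2 means roughly log2(m) iterations.
public class DoublingLoopDemo {
    public static void main(String[] args) {
        for (int m : new int[]{16, 1_000, 1_000_000}) {
            int iterations = 0;
            for (int j = 1; j < m; j *= 2) {
                iterations++;
            }
            System.out.printf("m=%d iterations=%d log2(m)=%.1f%n",
                    m, iterations, Math.log(m) / Math.log(2));
        }
    }
}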

Run time of algorithm

I am trying to figure out the run time of the following algorithm.
I argue it is O(n) because the inner loop does not depend on the outer loop.
So we could have O(n) + O(n) = O(2n) which equals O(n)
Is this correct? I'm not sure my logic is right, and I cannot figure out how to analyze it correctly.
The algorithm finds, for each element of the list, the nearest larger element to its left.
Thanks!
public static void main(String[] args) {
    int[] a = {4, 3, 2, 10, 4, 8, 9, 1};
    int[] p = new int[a.length];
    ArrayDeque<Integer> previousIndex = new ArrayDeque<Integer>();
    for (int i = 0; i < a.length; i++) {
        while (!previousIndex.isEmpty() && a[previousIndex.peek()] <= a[i]) {
            previousIndex.pop();
        }
        if (previousIndex.isEmpty()) {
            p[i] = 0;
        } else {
            p[i] = previousIndex.peek();
        }
        previousIndex.push(i);
    }
    for (int i = 0; i < p.length; i++) {
        System.out.println(p[i]);
    }
}
This is O(N): even though you have a loop within a loop, the total number of times the inner loop can execute, over the whole run, can never be more than the total number of times that
previousIndex.push(i);
is called, which is a.length (or N).
To work out the order, you really look at the worst case. You are correct that the nested loop is the cause for concern here:
for (int i = 0; i < a.length; i++) {
This is immediately order N.
while (!previousIndex.isEmpty() && a[previousIndex.peek()] <= a[i]) {
This could potentially also run nearly N times.
So a pessimistic bound is N*N, or N^2.
You do have to keep in mind the usual case, though. If the while loop is in fact likely to exit after only a couple of iterations, you could get back down to O(N).
In fact, you are fine and have an O(N) algorithm - but it's harder to prove than most. This assumes, though, that .isEmpty(), .peek() etc. on the ArrayDeque are all constant-time operations. Consult the documentation to be sure.
The key is that your processing of the deque in the inner loop is destructive:
while (!previousIndex.isEmpty() && a[previousIndex.peek()] <= a[i]) {
    previousIndex.pop();
}
This removes an element from previousIndex each time, and it can only run when there is an element to remove. Therefore, the total number of times the while loop can run, across all indices, is bounded by the number of times something is pushed onto the deque. And since pushes happen at only one point, at the end of each iteration of the first for loop, the number of items pushed is O(N).
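As a sketch of that amortized argument (the counters are instrumentation added here, not part of the answer), you can count the pushes and pops across the whole run and see that pops never exceed pushes, which equal a.length:

import java.util.ArrayDeque;

public class AmortizedCheck {
    public static void main(String[] args) {
        int[] a = {4, 3, 2, 10, 4, 8, 9, 1};
        int[] p = new int[a.length];
        ArrayDeque<Integer> previousIndex = new ArrayDeque<Integer>();
        int pushes = 0, pops = 0;
        for (int i = 0; i < a.length; i++) {
            while (!previousIndex.isEmpty() && a[previousIndex.peek()] <= a[i]) {
                previousIndex.pop();
                pops++;                 // each pop removes an index pushed earlier
            }
            p[i] = previousIndex.isEmpty() ? 0 : previousIndex.peek();
            previousIndex.push(i);
            pushes++;                   // exactly one push per element
        }
        // pops can never exceed pushes, and pushes == a.length, so the
        // total work of the inner while loop is O(N) over the whole run.
        System.out.println("pushes=" + pushes + " pops=" + pops);
    }
}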
