Big-O notation question - Java

I've got a big O notation of O(n^2 + n + n), and I just want to make sure: this is just O(n^2), correct? Basically I just have a for-loop that goes through all of n, with another for-loop that does the same inside the first one, then two separate for-loops that also go through all of n.

Generally, in Big-O notation we only consider the fastest-growing term in N.
That said, if we are comparing two algorithms with very similar performance, the other terms may matter.
For example:
If one algorithm performs N operations and another performs 2N operations, both are O(N), but the latter will still perform worse than the former in practice.
Hence, in your case, your algorithm's Big-O notation is O(N^2).
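As a rough Java sketch of the loop structure you describe (the method name and counter are illustrative, not from your code), counting operations makes this concrete:

static long countOperations(int n) {
    long ops = 0;
    for (int i = 0; i < n; i++) {       // outer loop: n iterations
        for (int j = 0; j < n; j++) {   // inner loop: n iterations each
            ops++;                      // runs n * n times in total
        }
    }
    for (int i = 0; i < n; i++) {
        ops++;                          // first separate loop: n times
    }
    for (int i = 0; i < n; i++) {
        ops++;                          // second separate loop: n times
    }
    return ops;                         // n^2 + n + n, which is O(n^2)
}

For large n, the n^2 term dominates the two n terms, which is why the whole expression collapses to O(n^2).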

Big O Notation/Time Complexity issue

Would a recursive method that is called recursively n times (but not from inside a for loop) and contains only if/else statements within the method be considered O(N) or O(1)? Thanks!
It would be O(N).
The general approach to doing complexity analysis is to look for all significant constant-time executable statements or expressions (like comparisons, arithmetic operations, method calls, assignments) and work out an algebraic formula giving the number of times that they occur. Then reduce that formula to the equivalent big O complexity class.
In your case, method calls are significant.
When you have some experience at doing this, you will be able to leave out statements that "obviously" don't contribute to the overall complexity. But to start with, it is a good exercise to count everything.
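For instance, a recursive method of the kind the question describes might look like the following sketch (the method name and logic are made up for illustration):

// One call per level of recursion, n levels deep, O(1) work per call,
// so the method is O(N) overall.
static int countDown(int n) {
    if (n <= 0) {
        return 0;                       // base case: constant-time
    } else {
        return 1 + countDown(n - 1);    // single recursive call
    }
}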
Here is an example of factorial code in Python. Recursion works much like a loop: we call the factorial function again and again until the base-case condition is satisfied, just as a loop repeats until its condition fails. So it is O(N).
def factorial(n):
    if n < 1:  # base case
        return 1
    else:
        returnNumber = n * factorial(n - 1)  # recursive call
        print(str(n) + '! = ' + str(returnNumber))
        return returnNumber
When you calculate time complexity, there are three cases:
1) Best case
2) Worst case
3) Average case
Since you wrote O(n), you are asking about the worst-case time complexity, which is indeed O(n): in the worst case, the branch containing the recursive call is taken every time.
If you are looking at the best case, you can take it as O(1), since the method might not recurse at all, depending on the if/else condition.
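As a hypothetical illustration of that best/worst case split (the method is made up), consider a recursive search:

// Best case: the first element matches and no recursion happens -> O(1).
// Worst case: the method recurses through all n elements -> O(n).
static boolean contains(int[] values, int target, int index) {
    if (index >= values.length) {
        return false;                               // exhausted the array
    } else if (values[index] == target) {
        return true;                                // best case: immediate hit
    } else {
        return contains(values, target, index + 1); // recurse on the rest
    }
}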

Java Big O notation in algorithm

I am confused by this Big-O notation problem. This code does not look like O(n), but the inner for loop runs once per word in a line, which is basically never more than 20.
So, if we say the length of line.split() is a constant c, can we say O(c * n) = O(n)?
while (this.scannerMainText.hasNextLine()) {
    String line = this.scannerMainText.nextLine();
    for (String word : line.split("[.!,\" ]+")) {
        // some statements
    }
}
Yes. A constant bound on the number of times the inner loop runs has a negligible impact on the growth rate, so it is usually not reflected in the Big-O notation.
For example, if an O(n) loop does 20 units of work per iteration, you might expect the notation to be O(20n), but because the constant factor does not change how the cost grows, it is dropped, so O(20n) = O(n). The same goes for O(20n²) = O(n²), etc.
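A minimal sketch of that idea, assuming (as the question does) that each line contains at most about 20 words:

import java.util.List;

class WordCounter {
    // n lines, at most a constant (~20) words per line:
    // total work is at most 20 * n, which is O(n).
    static int countWords(List<String> lines) {
        int words = 0;
        for (String line : lines) {                       // n iterations
            for (String word : line.split("[.!,\" ]+")) { // bounded by a constant
                words++;
            }
        }
        return words;
    }
}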
Yes.
Time complexity is based only on how the number of iterations grows with the input size; the actual number/cost of steps within the loop body (c, as you call it) is not considered for Big-O notation.
Added
I'm not familiar enough with the edge cases of the theory to say whether, if all of the lines are equal in length, you can reduce the runtime of the inner loop to a constant. In general, this algorithm would be considered O(mn).

Is the time complexity of an algorithm calculated only based on the number of times a loop executes?

I have a big doubt about calculating time complexity. Is it calculated based on the number of times a loop executes? My question stems from the situation below.
I have a class A, which has a String attribute.
class A {
    String name;
}
Now, I have a list of class A instances. This list has different names in it. I need to check whether the name "Pavan" exists in any of the objects in the list.
Scenario 1:
Here the for loop executes listA.size() times, which can be said to be O(n):
public boolean checkName(List<A> listA, String inputName) {
    for (A a : listA) {
        if (a.name.equals(inputName)) {
            return true;
        }
    }
    return false;
}
Scenario 2:
Here the for loop executes about listA.size()/2 + 1 times:
public boolean checkName(List<A> listA, String inputName) {
    // ceil(size / 2) iterations, so the middle element is covered for odd sizes
    int length = (listA.size() + 1) / 2;
    for (int i = 0; i < length; i++) {
        if (listA.get(i).name.equals(inputName)
                || listA.get(listA.size() - i - 1).name.equals(inputName)) {
            return true;
        }
    }
    return false;
}
I minimized the number of times the for loop executes, but I increased the complexity of the logic.
Can we say this is O(n/2)? If so, can you please explain?
First note that in Big-O notation there is no such thing as O(n/2): 1/2 is a constant factor, which is ignored in this notation. The complexity remains O(n), so by modifying your code you haven't changed anything regarding complexity.
In general, estimating the number of times a loop is executed with respect to the input size, together with the cost of the operation inside it, is the way to arrive at the complexity class of the algorithm.
The operation that produces the cost in your method is String.equals, which, looking at its implementation, incurs its cost by comparing characters.
In your example the input size is not strictly equal to the size of the list. It also depends on how long the strings contained in that list are and how long inputName is.
So let's say the longest string in the list is m1 characters and inputName is m2 characters long. Then for your original checkName method the complexity would be O(n * min(m1, m2)), because String.equals compares at most that many characters.
For most applications the min(m1, m2) term doesn't matter, as one of the compared strings is, for example, stored in a fixed-size database column, making this expression a constant, which is, as said above, ignored.
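To see where the min(m1, m2) term comes from, here is a simplified model of a character-by-character comparison (a sketch, not the actual JDK String.equals implementation):

// Strings of different lengths fail fast; equal-length strings are
// compared character by character. The work is therefore bounded by
// roughly min(m1, m2) character operations.
static boolean stringsEqual(String a, String b) {
    if (a.length() != b.length()) {
        return false;                       // O(1) when lengths differ
    }
    for (int i = 0; i < a.length(); i++) {  // at most min(m1, m2) steps
        if (a.charAt(i) != b.charAt(i)) {
            return false;
        }
    }
    return true;
}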
No. In Big-O expressions, all constant factors are ignored.
We only care about how the cost grows with n, as in O(n^2) or O(log n).
Time and space complexity are calculated based on the number of operations executed and the number of units of memory used, respectively.
Regarding time complexity: all the operations are counted. Because it is hard to compare, say, O(2n^2 + 5n + 3) with O(3n^2 - 3n + 1), equivalence classes are used. For very large values of n, the two previous examples have roughly similar values (more precisely: they have a similar rate of growth). Therefore, you reduce the expression to its most basic form and say that the two examples are in the same equivalence class, O(n^2). Similarly, O(n) and O(n/2) are in the same class, and therefore both are O(n).
Because of this, you can ignore most constant-time operations (such as .size() or .length() on collections, assignments, etc.) as they don't really count in the end. You are therefore left with loop operations and sometimes expensive computations (which, somewhere lower on the stack, use loops themselves).
To get a better understanding of complexity, try reading articles on the subject, such as: http://discrete.gr/complexity/
Time complexity is a measure of the theoretical time it will take for an operation to be executed.
While normally any improvement in the time required is significant, in time complexity we are only interested in the order of magnitude. That means:
If an operation on N objects requires N time intervals, then it has complexity O(N).
If an operation on N objects requires N/2, its complexity is still O(N), though.
This apparent paradox is resolved when you calculate the operation for large N: the /2 part makes no big difference relative to the N part. Likewise, if one term is O(N^2), then an O(N) term is negligible for large N. That is why we only care about the order of magnitude.
In other words, any constant is thrown away when calculating complexity.
As for the question:
Is it calculated based on the number of times a loop executes?
Well, it depends on what the loop contains. If only basic operations are executed inside the loop, then yes. But if, for example, each run of the loop performs an eigenanalysis, which has complexity O(N^3), you cannot say that your overall complexity is simply O(N).
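As a made-up sketch of that point: if each of the n iterations of a loop performs O(n^2) work, the total is O(n^3), even though "the loop" itself only runs n times:

static long expensiveLoop(int n) {
    long total = 0;
    for (int i = 0; i < n; i++) {           // the loop: n iterations
        for (int j = 0; j < n; j++) {       // O(n^2) work per iteration,
            for (int k = 0; k < n; k++) {   // standing in for any expensive
                total++;                    // superlinear operation
            }
        }
    }
    return total;                           // n^3 operations -> O(n^3)
}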
The complexity of an algorithm is measured by how its processing time or space requirements respond to the input size. I think you are missing the fact that the notations used to express complexity are asymptotic notations.
As to your question: you have reduced the loop execution count, but not the linear relation with the input size.

Big O for a finite, fixed size set of possible values

This question triggered some confusion and many comments about whether the algorithms proposed in the various answers are O(1) or O(n).
Let's use a simple example to illustrate the two points of view:
we want to find a long x such that a * x + b = 0, where a and b are known, nonzero longs.
An obvious O(1) algo is x = -b / a.
A much slower algo would consist in testing every possible long value, which would be about 2^63 times slower on average.
Is the second algorithm O(1) or O(n)?
The arguments presented in the linked questions are:
it is O(n) because in the worst case you need to loop over all possible long values
it is O(1) because its complexity is of the form c x O(1), where c = 2^64 is a constant.
Although I understand the argument to say that it is O(1), it feels counter-intuitive.
ps: I added java as the original question is in Java, but this question is language-agnostic.
The complexity is only relevant if there is a variable N. So, the question makes no sense as is. If the question was:
A much slower algo would consist in testing every possible value in a range of N values, which would be about N times slower on average.
Is the second algorithm O(1) or O(N)?
Then the answer would be: this algorithm is O(N).
Big O describes how an algorithm's performance will scale as the input size n scales. In other words as you run the algorithm across more input data.
In this case the input data is a fixed size so both algorithms are O(1) albeit with different constant factors.
If you took "n" to mean the number of bits in the numbers (i.e. you removed the restriction that it's a 64-bit long), then you could analyze for a given bit size n how do the algorithms scale.
In this scenario, the first would still be O(1) (see Qnan's comment), but the second would now be O(2^n).
I highly recommend watching the early lectures from MIT's "Introduction to Algorithms" course. They are a great explanation of Big O (and Big Omega/Theta), although they do assume a good grasp of maths.
Checking every possible input is O(2^N), where N is the number of bits in the solution. When you make the number of bits constant, both algorithms are O(1): you know exactly how many candidates you need to check.
Fact: Every algorithm that you actually run on your computer is O(1), because the universe has finite computational power (there are finitely many atoms and finitely many seconds have passed since the Big Bang).
This is true, but not a very useful way to think about things. When we use big-O in practice, we generally assume that the constants involved are small relative to the asymptotic terms, because otherwise giving only the asymptotic term doesn't tell you much about how the algorithm performs. This works well in practice because the constants usually come from choices like "do I use an array or a hash map", at most about a 30x difference, while the inputs are 10^6 or 10^9, so the difference between a quadratic and a linear algorithm matters more than constant factors. Discussions of big-O that don't respect this convention (like algorithm #2) are pointless.
Whatever the values of a and b are, the worst case is still to check 2^64 or 2^32 or 2^somevalue values. This algorithm's complexity is O(2^k) time, where k is the number of bits used to represent a long value, or O(1) time if we consider only the values of a and b.
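To make the two algorithms concrete, here is a hedged Java sketch (simplified: it ignores the possibility that a * x overflows, and assumes an exact integer solution exists):

// Direct solution: a constant number of operations -> O(1).
static long solveDirect(long a, long b) {
    return -b / a;
}

// Brute force: try every candidate value. Over k-bit values this is
// O(2^k) in the worst case; with k fixed at 64 it is technically a
// (very large) constant, which is the whole debate above.
static long solveBruteForce(long a, long b) {
    for (long x = Long.MIN_VALUE; ; x++) {
        if (a * x + b == 0) {
            return x;
        }
        if (x == Long.MAX_VALUE) {
            break;                      // all 2^64 candidates exhausted
        }
    }
    throw new ArithmeticException("no integer solution");
}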

Big O notation for accessing the middle element in a linked list, and binary search?

The article at http://leepoint.net/notes-java/algorithms/big-oh/bigoh.html says that the Big-O notation for accessing the middle element in a linked list is O(N). Shouldn't it be O(N/2)? Assume we have 100 elements in a linked list; to access the 50th element it has to traverse from the first node to the 50th node, so the number of operations will be around 50.
Another Big-O value given is log(N) for binary search. Assume we have 1000 elements. As per log(N), we would need around 3 operations. Now let's count manually; the pattern would be:
500th element, 250th, 125th, 63rd, 32nd, 16th, 8th, 4th, 2nd, so the operations are around 9, which is much larger than 3.
Is there anything I am missing here?
What you're missing is that constant multiples don't matter for Big O. So we have O(N) = O(N/2).
About the log part of the question: it is actually log2(N), not log10(N), so in this case log2(1000) is actually about 9 (rounding down). Also, as before, O(log2(N)) = O(log10(N)), since the two are just constant multiples of each other. Specifically, log2(N) = log10(N) / log10(2).
The last thing to consider is that, if several functions are added together, the functions of lower degree don't matter for Big O. That's because higher-degree functions grow more quickly than functions of lower degree. So we find things like O(N^3 + N) = O(N^3), and O(e^N + N^2 + 1) = O(e^N).
So those are the two things to consider: drop constant multiples, and drop functions of lower degree.
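One way to convince yourself of the log2 figure is to count the probes a binary search actually makes (a small sketch; for a sorted array of 1000 elements it makes at most about ceil(log2(1001)) = 10 probes, matching the manual count of ~9, not log10(1000) = 3):

static int probes(int[] sorted, int target) {
    int lo = 0, hi = sorted.length - 1, count = 0;
    while (lo <= hi) {
        count++;                        // one probe per halving
        int mid = lo + (hi - lo) / 2;
        if (sorted[mid] == target) {
            return count;
        } else if (sorted[mid] < target) {
            lo = mid + 1;
        } else {
            hi = mid - 1;
        }
    }
    return count;                       // absent target: ~log2(n) probes
}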
O(n) means "proportional to n, for all large n".[1]
So it doesn't have to be equal to n.
Edit:
My explanation above was a little unclear as to whether something in O(n) is also in O(n^2). It is -- my use of the word "proportional" was poor, and as the others were trying to mention (and as I was misunderstanding originally), "converge" would be a better term.[2]
[1]: It actually means "proportional to n when n is large, in the worst-case scenario", but books usually consider the "average" case, which is more accurately represented by f(n) ~ n, where f is your function.
[2]: For the more technical people: it should be obvious, but I am not intending to be mathematically rigorous. Math.SE might be a better choice for asking this question, if you're looking for formal proofs (with ratios, limits, and whatnot).
From the web page you linked: "Similarly, constant multipliers are ignored. So a O(4*N) algorithm is equivalent to O(N), which is how it should be written. Ultimately you want to pay attention to these multipliers in determining the performance, but for the first round of analysis using Big-Oh, you simply ignore constant factors."
Big O notation is a general expression of an algorithm's efficiency. You should not construe it as an exact equation for how fast an algorithm is going to be, or how much memory it is going to require, but rather view it as an approximation. The constant multipliers of the function are ignored, so for example you could view your case as O((1/2) * N) or O(k * N); drop the k, which gives you O(N).
