Why do we take only the highest-degree term for Big O notation? I understand that we can drop the constants, as they won't matter for very large values of n.
But say an algorithm takes (n log n + n) time; why do we ignore the n term in this case, so that the Big O comes out to O(n log n)?
Big O is supposed to be an upper bound on the time taken by the algorithm. So shouldn't it be (n log n + n), even for very large values of n?
Because Big O is an asymptotic comparison: it answers the question of how functions compare for large n. Lower-order terms become insignificant to the function's behavior once n is sufficiently large.
One way to see that: n log(n) + n is smaller than 2·n log(n) (for n ≥ 2). Now you can drop the constant 2.
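As a quick illustration (a small standalone sketch, not from the original posts), the ratio (n log n + n) / (n log n) approaches 1 as n grows, which is exactly why the extra n term does not change the asymptotic class:

public class LowerOrderTerms {
    public static void main(String[] args) {
        // The n term becomes negligible next to n*log(n) as n grows.
        for (long n = 10; n <= 10_000_000L; n *= 10) {
            double dominant = n * Math.log(n);
            double full = dominant + n;
            System.out.printf("n=%d  ratio=%.4f%n", n, full / dominant);
        }
    }
}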
If an algorithm executes a statement n/2 times, how come its complexity is O(n)? The video explains that it is because of the degree of the polynomial. Please explain.
for (int i = 0; i < n; i = i + 2) {
    System.out.println(n); // this statement is printed n/2 times
}
f(n) = n/2, so the complexity is O(n)
In simple words, although the statement will be printed n/2 times, it still holds a linear relationship with n.
For n=10, it will print 5 times.
For n=50, it will print 25 times.
For n=100, it will print 50 times.
Notice the linear relationship. The factor 1/2 is just a constant multiplied by n: it is still a linear relationship, and O(n) describes a linear relationship without caring about the constant factor (which is 1/2 in this case). Even f(n) = n/3 would still be O(n).
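A quick way to check this empirically (a small sketch; the counting helper is just for illustration) is to count the iterations for a few values of n and see that the count scales linearly:

public class HalfStepCount {
    static int countIterations(int n) {
        int count = 0;
        for (int i = 0; i < n; i = i + 2) {
            count++; // one "print" per iteration, n/2 in total
        }
        return count;
    }

    public static void main(String[] args) {
        for (int n : new int[]{10, 50, 100}) {
            System.out.println("n=" + n + " -> " + countIterations(n) + " iterations");
        }
    }
}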
Yes, as Aoerz already said: to understand your problem, you should understand what the O notation means.
In mathematical terms:
O(f(n)) = { g(n) : ∃ c > 0 and n0 ≥ 0 such that g(n) ≤ c·f(n) for all n ≥ n0 }
so g(n) ∈ O(f(n)) if g(n) ≤ c·f(n) for all n beyond some threshold n0, for some constant c.
To put it in an easy way: think of n as a really big number. How much do all the other factors matter? What is the only main factor that really matters?
Example:
f(n) = n^3 + 300·n + 5 --> f(n) ∈ O(n^3) (try it with n = 100 and you'll see that the n^3 term already dominates)
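Working the numbers for that example: at n = 100, n^3 = 1,000,000 while 300·n + 5 = 30,005, so the cubic term already contributes about 97% of the total, and the gap only widens as n grows.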
What I need is an explanation of how to determine it. Here are a few examples; I hope you can help me find their complexity using Big-O notation:
For each of the following, find the dominant term(s) having the sharpest increase in n and give the time complexity using Big-O notation.
Consider that we always have n>m.
Expression                        Dominant term(s)    O(…)
5 + 0.01n^3 + 25m^3
500n + 100n^1.5 + 50n·log n
0.3n + 5n^1.5 + 2.5n^1.75
n^2·log n + n·(log2 m)^2
m·log3 n + n·log2 n
50n + 5^3·m + 0.01n^2
It's fairly simple.
As n rises to large numbers (towards infinity), some parts of the expression become insignificant, so remove them.
Also, O() notation is relative, not absolute, meaning there is no scale, so constant factors are meaningless; remove them too.
Example: 100 + 2n. At low values of n, 100 is the main contributor to the result, but as n increases it becomes insignificant. Since there is no scale, n and 2n are the same thing, i.e. a linear curve, so the result is O(n).
Or, said another way, you choose the fastest-growing curve in the expression, as in the complexity comparison graph from bigocheatsheet.com.
Let's take your second example: 500n + 100n^1.5 + 50n·log n
The 1st part is O(n).
The 2nd part is O(n^1.5).
The 3rd part is O(n·log n).
The fastest-rising term is n^1.5, so the answer is O(n^1.5).
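As a rough check (a throwaway sketch with arbitrary sample sizes), evaluating the three terms side by side shows how quickly the n^1.5 term takes over:

public class DominantTerm {
    public static void main(String[] args) {
        // Compare 500n, 100n^1.5, and 50·n·log(n) for growing n.
        for (long n = 1_000; n <= 1_000_000_000L; n *= 1_000) {
            double linear = 500.0 * n;
            double power = 100.0 * Math.pow(n, 1.5);
            double nLogN = 50.0 * n * Math.log(n);
            System.out.printf("n=%d  500n=%.3e  100n^1.5=%.3e  50nlogn=%.3e%n",
                    n, linear, power, nLogN);
        }
    }
}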
I have a set of numbers for which I want to compute the total product modulo 1000007. E.g., if my array contains 1000 numbers, then I need to compute the following:
int product = 1;
for (int i = 0; i < Array_Max; i++)
    product = product * Array[i];
then product modulo 1000007 = ?
Is there any algorithm to optimize the above pseudocode? Right now I am unable to store the product because of overflow.
Any suggestions appreciated.
Use longs for the product - a long is enough to hold the result of multiplying two integers.
You will also want to reduce modulo 1000007 on each iteration so that you avoid overflow (this is safe from a mathematical perspective, as it doesn't change the final result modulo 1000007).
You could also use BigInteger if you wanted, but it would be much slower.
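A minimal Java sketch of this suggestion (the class and method names are made up here, the modulus is the 1000007 the question uses, and the inputs are assumed to be non-negative ints):

public class ProductModulo {
    static int productMod(int[] array) {
        final long MOD = 1_000_007L;            // modulus as written in the question
        long product = 1;
        for (int value : array) {
            // Reduce every step: product stays below MOD, so product * value fits in a long.
            product = (product * value) % MOD;
        }
        return (int) product;
    }

    public static void main(String[] args) {
        int[] numbers = {123456, 789012, 345678}; // example values
        System.out.println(productMod(numbers));
    }
}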
I know this question is pretty easy, but my requirement is to compute x to the power x, where x is a very large number, in the most optimized way possible. I am not a math geek and therefore need some help figuring out the best approach.
In Java, we can use BigInteger, but how do we optimize the code? Is there any specific approach for the optimization?
Also, with a large value of x, will using recursion make the code slow and prone to a stack overflow error?
E.g.: 457474575 raised to the power 457474575.
You do realize that the answer to your example is going to be a very, very large number, even for a BigInteger? It will have 3961897696 digits!
The best way to work with really large numbers, if you don't need exact precision, is to work with their logarithms instead. To compute x to the power x, take the log of x and multiply it by x. If you need to convert it back, take e to that value, i.e. exp(x·log x), except in this case it will almost certainly overflow.
This is one of the simplest optimized approaches (exponentiation by squaring):
x^1 · x^1 = x^2
x^2 · x^2 = x^4
etc...
x^x = x^(x/2) · x^(x/2) · (an extra factor of x when the exponent is odd)
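A minimal iterative sketch of this squaring idea in Java (the class and method names are made up; as noted above, the result for the question's example is far too large to actually materialize):

import java.math.BigInteger;

public class PowBySquaring {
    static BigInteger powBySquaring(BigInteger base, long exponent) {
        BigInteger result = BigInteger.ONE;
        BigInteger b = base;
        long e = exponent;
        while (e > 0) {
            if ((e & 1) == 1) {
                result = result.multiply(b); // odd exponent: multiply the extra factor in
            }
            b = b.multiply(b);               // square the base
            e >>= 1;                         // halve the exponent
        }
        return result;
    }

    public static void main(String[] args) {
        // Small example; the question's 457474575^457474575 would not fit in memory.
        System.out.println(powBySquaring(BigInteger.valueOf(3), 13)); // 1594323
    }
}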
BigInteger.valueOf(457474575).pow(457474575);
So, I've got a recursive method in Java for getting the nth Fibonacci number. The only question I have is: what's the time complexity? I think it's O(2^n), but I may be mistaken? (I know that iterative is way better, but it's an exercise.)
public int fibonacciRecursive(int n)
{
if(n == 1 || n == 2) return 1;
else return fibonacciRecursive(n-2) + fibonacciRecursive(n-1);
}
Your recursive code has exponential runtime, but I don't think the base is 2; it's probably the golden ratio (about 1.618). Of course, O(1.618^n) is automatically O(2^n) too.
The runtime can be calculated recursively:
t(1)=1
t(2)=1
t(n)=t(n-1)+t(n-2)+1
This is very similar to the recursive definition of the Fibonacci numbers themselves. The +1 in the recurrence is irrelevant for large n, so I believe t(n) grows approximately as fast as the Fibonacci numbers, and those grow exponentially with the golden ratio as base.
You can speed it up using memoization, i.e. caching already calculated results. Then it has O(n) runtime just like the iterative version.
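A sketch of what that memoized version could look like (the cache, class, and method names are illustrative, not from the original post; long overflows past fib(92)):

import java.util.HashMap;
import java.util.Map;

public class FibMemo {
    private static final Map<Integer, Long> cache = new HashMap<>();

    static long fibonacciMemo(int n) {
        if (n == 1 || n == 2) return 1;            // same base cases as the question's code
        Long cached = cache.get(n);
        if (cached != null) return cached;          // reuse already-computed values: O(n) total
        long value = fibonacciMemo(n - 2) + fibonacciMemo(n - 1);
        cache.put(n, value);
        return value;
    }

    public static void main(String[] args) {
        System.out.println(fibonacciMemo(50));      // 12586269025
    }
}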
Your iterative code has a runtime of O(n)
You have a simple loop with O(n) steps and constant time for each iteration.
You can also calculate Fn in O(log n), for example by fast exponentiation of the matrix [[1, 1], [1, 0]], whose nth power contains Fn.
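A small sketch of that matrix idea (class and method names are illustrative; using long, so it is only valid up to fib(92)):

public class FibMatrix {
    // Multiply two 2x2 matrices.
    static long[][] multiply(long[][] a, long[][] b) {
        return new long[][]{
            {a[0][0] * b[0][0] + a[0][1] * b[1][0], a[0][0] * b[0][1] + a[0][1] * b[1][1]},
            {a[1][0] * b[0][0] + a[1][1] * b[1][0], a[1][0] * b[0][1] + a[1][1] * b[1][1]}
        };
    }

    // [[1,1],[1,0]]^n has fib(n) in its top-right entry.
    static long fib(int n) {
        long[][] result = {{1, 0}, {0, 1}};      // identity matrix
        long[][] base = {{1, 1}, {1, 0}};
        int e = n;
        while (e > 0) {                           // exponentiation by squaring: O(log n) multiplications
            if ((e & 1) == 1) result = multiply(result, base);
            base = multiply(base, base);
            e >>= 1;
        }
        return result[0][1];
    }

    public static void main(String[] args) {
        System.out.println(fib(10)); // 55
        System.out.println(fib(50)); // 12586269025
    }
}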
Each function call does exactly one addition or returns 1. The base cases only return the value one, so the total number of additions is fib(n) − 1. The total number of function calls is therefore 2·fib(n) − 1, so the time complexity is Θ(fib(n)) = Θ(φ^n), which is bounded by O(2^n).
O(2^n)? I see only O(n) here.
I wonder why you'd continue to calculate and re-calculate these? Wouldn't caching the ones you have be a good idea, as long as the memory requirements didn't become too odious?
Since they aren't changing, I'd generate a table and do lookups if speed mattered to me.
It's easy to see (and to prove by induction) that the number of base-case calls to fibonacciRecursive (the leaves of the call tree) is exactly fib(n), and the total number of calls is 2·fib(n) − 1. That is indeed exponential in the input number.