What is degree of a polynomial f(n) = n/20 [duplicate] - java

This question already has answers here:
What is a plain English explanation of "Big O" notation?
(43 answers)
Big-O for Eight Year Olds? [duplicate]
(25 answers)
What does O(log n) mean exactly?
(32 answers)
Big O, how do you calculate/approximate it?
(24 answers)
How can I find the time complexity of an algorithm?
(10 answers)
Closed 3 years ago.
If an algorithm executes a statement n/2 times, how come its complexity is O(n)? The video explains that it is because of the degree of the polynomial. Please explain.
for (int i = 0; i < n; i = i + 2) {
    System.out.println(n);   // this statement is printed n/2 times
}
f(n) = n/2, so O(n)

In simple words, although the statement will be printed n/2 times, it still holds a linear relationship with n.
For n=10, it will print 5 times.
For n=50, it will print 25 times.
For n=100, it will print 50 times.
Notice the linear relationship: the factor 1/2 is just a constant multiplied by n. O(n) signifies a linear relationship and doesn't care about the constant (which is 1/2 in this case). Even f(n) = n/3 would have been O(n).
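As a quick sanity check, you can count how often the loop body runs for a few values of n (a minimal sketch; the method name countPrints is mine):

// Counts how many times the loop body executes for a given n.
static int countPrints(int n) {
    int count = 0;
    for (int i = 0; i < n; i = i + 2) {
        count++;
    }
    return count;
}
// countPrints(10) == 5, countPrints(50) == 25, countPrints(100) == 50:
// the count grows linearly with n, just scaled by the constant factor 1/2.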

Yes, as Aoerz already said, to understand your problem, you should understand what the O notation means.
In mathematical terms:
O(f(n)) = { g(n) : ∃ c > 0 and n0 ≥ 0 such that g(n) ≤ c*f(n) for all n ≥ n0 }
so g(n) ∈ O(f(n)) if g(n) ≤ c*f(n) for some constant c and all n beyond some n0.
To put it in an easy way, think of n as a really big number: how much do all the other terms matter? What is the one main term that really matters?
Example:
f(n) = n^3 + 300*n + 5 --> f(n) ∈ O(n^3) (try it with n = 100 and you'll see that it is already enough)
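For instance, at n = 100 the cubic term alone is 100^3 = 1,000,000, while 300*100 + 5 = 30,005, so n^3 already accounts for about 97% of f(100) = 1,030,005, and its share only grows as n increases.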

Related

Lowest k pairwise absolute differences [duplicate]

This question already has answers here:
Find pairs with least difference
(3 answers)
Closed 6 months ago.
Given a list of integers and a number k, we have to return the k minimum absolute differences between distinct pairs of integers, in sorted (ascending) order.
E.g., if the given list of integers is 6, 9, 1 and k = 2, then the output should be [3, 5],
because the pairwise absolute differences are |6-9| = 3, |6-1| = 5, |9-1| = 8, and the lowest 2 in ascending order are 3, 5.
I tried to solve this problem in the following ways:
Calculate the pairwise absolute differences -> sort the list -> return the first k elements.
Score: 7/15. Only 7 test cases passed out of 15; on the rest I got a Time Limit Exceeded error.
Instead of sorting, I put all the elements in a min-heap using PriorityQueue in Java. The results were similar: 8/15.
Not sure what a more efficient way to approach this problem could be. Any ideas?
1. Sort the list first.
2. Initialize d = 1.
3. Get the absolute difference between elements d positions apart and insert it into a min-k heap. Quit as soon as your heap has k elements.
4. Go to step 3 with d = d + 1.
The answer is in your heap. The complexity depends on k: if k ~ n^2, then it can be O(n^2) because you have to look at all pairs, but it can be much better if k << n^2.
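Here is a minimal Java sketch along those lines (the class and method names are mine, and it skips the early-quit optimization: it simply keeps the k smallest differences seen so far in a size-k max-heap while scanning distances d = 1, 2, ..., so the worst case is still about O(n^2 log k)):

import java.util.*;

public class KSmallestDiffs {
    // Returns the k smallest pairwise absolute differences of a[], in ascending order.
    static List<Integer> kSmallestDiffs(int[] a, int k) {
        Arrays.sort(a);                                            // step 1: sort
        // Max-heap holding the k smallest differences found so far (largest on top).
        PriorityQueue<Integer> heap = new PriorityQueue<>(Collections.reverseOrder());
        for (int d = 1; d < a.length; d++) {                       // steps 2-4: distances 1, 2, ...
            for (int i = 0; i + d < a.length; i++) {
                int diff = a[i + d] - a[i];                        // array is sorted, so this is |a[i+d] - a[i]|
                if (heap.size() < k) {
                    heap.offer(diff);
                } else if (diff < heap.peek()) {
                    heap.poll();                                   // evict the largest of the current k
                    heap.offer(diff);
                }
            }
        }
        List<Integer> result = new ArrayList<>(heap);
        Collections.sort(result);                                  // return them in ascending order
        return result;
    }

    public static void main(String[] args) {
        System.out.println(kSmallestDiffs(new int[]{6, 9, 1}, 2)); // prints [3, 5]
    }
}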

What is the big-o notation for this algorithm? [duplicate]

This question already has answers here:
Computing Time T(n) and Big-O with an infinite loop
(3 answers)
Closed 3 years ago.
What will be the big-O notation for the algorithm that consists of multiplying n inside the loop?
void testing(int n) {
    for (int i = 0; i < n; i++) {
        n = n * 2;
        System.out.println("hi" + n);
    }
}
I'll try to be as rigorous as possible for my answer.
EDIT: forgot to say, we assume that every operation like comparison, assignment and multiplication has complexity O(1).
In short, this algorithm does not terminate in most cases, so complexity is not defined for it.
Complexity is a kind of upper bound on the cost C of an algorithm: stating O(n) complexity means C <= k * n for some k > 0. A non-terminating algorithm has an infinite cost, and such a comparison between infinities is undefined.
Then, let's look at why your algorithm is non-terminating:
Each iteration, we continue if i < n. Yet each iteration n is multiplied by 2, so when the loop condition is checked we have n = n0 * 2^i, with n0 being the initial value of n.
Therefore, your algorithm only terminates when n0 <= 0, and in that case it never enters the loop at all.
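For a concrete check (treating n as an ideal, unbounded integer): with n0 = 1, after i iterations the loop condition compares i against 2^i, and i < 2^i holds for every i >= 0, so the condition never becomes false.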
I tried running your code in my IDE and I found that it is an infinite loop.
Algorithm complexity is only defined for algorithms, which by (the most often accepted) definition must terminate. When a program doesn't terminate, it is not an algorithm. So it has no "algorithmic time complexity".

Why only highest degree of polynomial for Big Oh? [duplicate]

This question already has answers here:
What is a plain English explanation of "Big O" notation?
(43 answers)
Closed 4 years ago.
Why do we take just the highest-degree term of a polynomial for Big O notation? I understand that we can drop the constants, as they won't matter for a very high value of n.
But say an algorithm takes (n log n + n) time; why do we ignore the n in this case, so that the Big O comes out to be O(n log n)?
Big O has to be an upper bound on the time taken by the algorithm. So shouldn't it be (n log n + n), even for very high values of n?
Because O is an asymptotic comparison, which answers the question of how the function behaves for large n. Lower-degree terms become insignificant to the function's behavior once n is sufficiently large.
One way to see that: n log(n) + n is smaller than 2 n log(n) (once n is large enough that log(n) >= 1), and now you can drop the constant 2.
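To put numbers on it: at n = 2^20 (about a million), n*log2(n) = 20 * 2^20 ≈ 2.1 * 10^7, and the extra n term is only 1/20 of that (5%); at n = 2^30 it has shrunk to about 3%. The lower-order term keeps losing ground to the dominant one, which is exactly what dropping it reflects.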

Random Shuffling in Java (or any language) Probabilities [duplicate]

This question already has answers here:
What distribution do you get from this broken random shuffle?
(10 answers)
Closed 7 years ago.
So, I'm watching Robert Sedgewick's videos on Coursera and I am currently at the Shuffling one. He's showing a "poorly written" shuffling code for online poker (it has a few other bugs, which I've removed because they are not related to my question). This is how the algorithm works:
for (int i = 0; i < N; i++) {
    int r = new Random().nextInt(53);
    swap(cardArray, i, r);
}
It iterates all the cards once. At each iteration a random number is generated and the i-th card is swapped with the r-th card. Simple, right?
While I understand the algorithm, I didn't understand his probability calculation. He said that because Random uses a 32-bit seed (or 64, it doesn't seem to matter), this is constrained to only 2^32 different permutations.
He also said that Knuth's algorithm is better (same for loop, but choose a number between 1 and i) because it gives you N! permutations.
I can agree with Knuth's algorithm calculations. But I think that on the first one (which is supposed to be the faulty one) there should be N^N different permutations.
Is Sedgewick wrong or am I missing a fact?
Sedgewick's way of explaining it seems very strange and obtuse to me.
Imagine you had a deck of only 3 cards and applied the algorithm shown.
After the first card was swapped there would be 3 possible outcomes. After the second, 9. And after the 3rd swap, 27. Thus, we know that using the swapping algorithm we will have 27 different possible outcomes, some of which will be duplicates of others.
Now, we know for a fact that there are 3 * 2 * 1 = 6 possible arrangements of a 3-card deck. However, 27 is NOT divisible by 6. Therefore, we know for a fact that some arrangements will be more common than others, even without computing what they are. Therefore, the swapping algorithm will not result in an equal probability among the 6 possibilities, i.e., it will be biased towards certain arrangements.
The same exact logic extends to the case of 52 cards.
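Put differently, equal probabilities would require each of the 6 arrangements to account for exactly 27/6 = 4.5 of the 27 equally likely swap sequences, which is impossible with whole-number counts, so some arrangements necessarily occur more often than others.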
We can investigate which arrangements are preferred by looking at the distribution of outcomes in the three-card case, which are:
Arrangement    Occurrences
1 2 3          5
1 3 2          5
2 1 3          4
2 3 1          4
3 1 2          4
3 2 1          5
Total          27
Examining these, we notice that combinations which require 0 or 1 swaps have more occurrences than combinations that require 2 swaps. In general, the fewer the number of swaps required for the combination, the more likely it is.
Since the sequence of numbers generated by a random number generator is uniquely determined by the seed, the argument is right, but it applies to Knuth's algorithm as well, and to any other shuffling algorithm: if N! > 2^M (where N is the number of cards and M is the number of bits in the seed), some permutations will never be generated. But even if the seed is big enough, the actual difference between the algorithms lies in the probability distribution: the first algorithm does not produce a uniform probability over the different permutations, while Knuth's does (assuming that the random generator is "random" enough). Note that Knuth's algorithm is also called the Fisher-Yates shuffle.
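For reference, a minimal Fisher-Yates (Knuth) shuffle in Java looks roughly like this (a sketch; the method name and the idea of passing in the Random are my own choices):

import java.util.Random;

// Fisher-Yates / Knuth shuffle: every one of the n! orderings is equally likely,
// assuming the random source itself is good enough.
static void fisherYatesShuffle(int[] deck, Random rng) {
    for (int i = deck.length - 1; i > 0; i--) {
        int r = rng.nextInt(i + 1); // uniform over 0..i, the not-yet-fixed part
        int tmp = deck[i];
        deck[i] = deck[r];
        deck[r] = tmp;
    }
}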
Sedgewick is right, of course. To get a truly random order of cards, you must first use an algorithm that selects equally among the N! possible permutations, which means one that selects one of N, then one of N-1, then one of N-2, etc., and produces a different result for each combination of choices, such as the Fisher-Yates algorithm.
Secondly, it is necessary to have a PRNG with an internal state of greater than log2(N!) bits, or else it will repeat itself before reaching all combinations. For 52 cards, that's 226 bits. 32 isn't even close.
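You can check that 226-bit figure directly with a few lines of Java (a quick sketch):

// log2(52!) = log2(2) + log2(3) + ... + log2(52)
double bits = 0;
for (int k = 2; k <= 52; k++) {
    bits += Math.log(k) / Math.log(2);
}
System.out.println(bits); // prints roughly 225.58, so at least 226 bits of state are needed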
I'm sorry, but I have to disagree with the answers of Aasmund and Lee Daniel. Every permutation of N elements can be expressed as the composition of at most 3(N-1) transpositions, each swapping position 1 with some index i between 1 and N (which is easy to prove by induction on N; see below). Therefore, in order to generate a random permutation it is enough to generate 3(N-1) random integers between 1 and N. In other words, your random generator only needs to be able to generate 3(N-1) different integers.
Theorem
Every permutation of {1, ..., N} can be expressed as the composition of at most N-1 transpositions.
Proof (by induction on N)
CASE N = 1.
The only permutation of {1} is (1) which can be written as the composition of 0 transpositions (the composition of 0 elements is the identity)
CASE N = 2. (Only for those who weren't convinced by the case N = 1 above.)
There are two permutations of 2 elements, (1,2) and (2,1). Permutation (1,2) is the identity (the degenerate transposition of 1 with itself). Permutation (2,1) is the transposition of 1 and 2.
INDUCTION: Case N -> Case N + 1.
Take any permutation s of {1, ..., N, N+1}. If s doesn't move N+1, then s is actually a permutation of {1, ..., N} and, by the induction hypothesis, can be written as the composition of at most N-1 transpositions between indexes i, j with 1 <= i, j <= N.
So let's assume that s moves N+1 to K. Let t be the transposition of N+1 and K. Then ts doesn't move N+1 (N+1 -> K -> N+1), and therefore, by the induction hypothesis, ts can be written as the composition of at most N-1 transpositions, i.e.,
ts = t1 ... tN-1.
Hence, s = t t1 ... tN-1,
which consists of at most N transpositions (one less than N+1, the number of elements).
Corollary
Every permutation of {1, ..., N} can be written as the composition of (at most) 3(N-1) transpositions between 1 and i, where 1 <= i <= N.
Proof
In view of the Theorem it is enough to show that any transposition between two indexes i and j can be written as the composition of 3 transpositions between 1 and some index. But
swap(i,j) = swap(1,i)swap(1,j)swap(1,i)
where the concatenation of swaps is the composition of these transpositions.
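If you want to convince yourself of that identity, here is a tiny check in Java (position 0 plays the role of position 1 above, since Java arrays are 0-based; the helper swap is my own):

// Applying swap(0,i), swap(0,j), swap(0,i) in sequence has the same effect as swap(i,j).
static void swap(int[] a, int x, int y) {
    int tmp = a[x];
    a[x] = a[y];
    a[y] = tmp;
}
// Example: with a = {10, 20, 30, 40}, i = 1, j = 3:
// swap(a, 0, 1); swap(a, 0, 3); swap(a, 0, 1);
// leaves a = {10, 40, 30, 20}, exactly what a single swap(a, 1, 3) would produce.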

Deciding a Big-O notation for an algorithm

I have questions for my assignment.
I need to decide what the Big-O characterization is for the following algorithms:
I'm guessing the answer for Question 1 is O(n) and for Question 2 it is O(log n), but I'm kind of confused about
how to state the reason. Are my answers correct? And could you explain why the characterization is like that?
Question 1: O(n), because the loop counter increments by a constant (1).
The first loop is O(n) and the second loop is also O(n),
so the total is O(n) + O(n) = O(n).
Question 2: O(lg n); it's binary search.
It's O(lg n) because the problem halves every time:
if the array has size n at first, the second step works on n/2, then n/4, ..., down to 1.
n / 2^i = 1  =>  n = 2^i  =>  i = log(n).
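Since the original loops from the assignment aren't shown here, the following is only a hypothetical sketch of code shapes that match the two descriptions above (everything in it is my own illustration, not the assignment's code):

// Question 1 shape: two separate loops, each stepping the counter by a constant -> O(n) + O(n) = O(n)
static void questionOneShape(int n) {
    for (int i = 0; i < n; i++) { /* constant-time work */ }
    for (int j = 0; j < n; j++) { /* constant-time work */ }
}

// Question 2 shape: the remaining problem size halves on every iteration -> O(log n)
static void questionTwoShape(int n) {
    for (int size = n; size >= 1; size = size / 2) { /* constant-time work */ }
}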
Yes, your answers are right. The first one is pretty simple: two separate for loops, so effectively it's O(n).
The second one is actually tricky. You are dividing the input size by 2 (halving it), which effectively leads to a time complexity of O(log n).
