I created this method in Java that indicates whether an integer array is sorted or not. What is its complexity? I think in the best case it's O(1) and in the worst case O(n), but what about the average case?
static boolean order(int[] a) {
    for (int i = 0; i < a.length - 1; i++) {
        if (a[i] > a[i + 1]) return false;
    }
    return true;
}
You didn't say anything about your input, so suppose it's totally random. Then for any pair of neighbours there is a 50% chance that they are in order. That means we reach step 1 with probability 1, step 2 with probability 0.5, step 3 with probability 0.25, and generally step k with probability 2^(-(k-1)). The expected number of steps is the sum of these probabilities (the expected value of a nonnegative integer variable is the sum over k of the probability of reaching step k):

1 + 1/2 + 1/4 + ... = sum over k >= 1 of 2^(-(k-1))
I didn't know how to calculate the sum of this series, so I used Wolfram Alpha and got the answer 2, so it's a constant.
So as I understand it, the average case for random input is O(1).
I'm not sure this is the correct way to calculate average complexity, but it seems fine to me.
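One way to sanity-check this (my addition, not part of the original answer) is to measure the average number of comparisons the early-exit scan makes on random arrays; the printed value should stay close to 2 no matter how large n is:

import java.util.Random;

public class AvgStepsDemo {
    // Same scan as order(), but returns how many comparisons were made.
    static int steps(int[] a) {
        int steps = 0;
        for (int i = 0; i < a.length - 1; i++) {
            steps++;
            if (a[i] > a[i + 1]) break;   // early exit, as in order()
        }
        return steps;
    }

    public static void main(String[] args) {
        Random rnd = new Random();
        int trials = 100_000, n = 100;
        long total = 0;
        for (int t = 0; t < trials; t++) {
            total += steps(rnd.ints(n).toArray());
        }
        System.out.println((double) total / trials);   // prints roughly 2
    }
}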
Complexity is usually quoted for the worst case, which in your case is O(n).
Given an unsorted array A of n integers and an integer x, rearrange the elements in A such that all elements less than or equal to x come before any elements larger than x.
Note: the integer x does not have to be included in the new array.
What is the running time complexity of your algorithm? Explain your answer.
To attempt this, you first need to understand sorting algorithms and Big O notation, then see which sorting algorithm best fits the question asked.
1) Given your problem defines that some values come before a set point and some after, a merge sort would be best here. See this.
2) Have a read about Big O notation and the time complexity of certain sorting algorithms. There is always a best case and a worst case. You can also calculate the complexity of any algorithm you design using this notation. See below.
Calculating complexity of an algorithm
EDIT:
To help, here is a solution.
Part 1:
import java.util.ArrayList;
import java.util.List;

static List<Integer> sortHalf(List<Integer> listToSort, int x) {
    List<Integer> firstHalf = new ArrayList<>();
    List<Integer> secondHalf = new ArrayList<>();
    for (Integer i : listToSort) {
        if (i < x) firstHalf.add(i);        // values below x go first
        else if (i > x) secondHalf.add(i);  // values above x go after
        // values equal to x are dropped, per the note in the question
    }
    List<Integer> finalList = new ArrayList<>(firstHalf);
    finalList.addAll(secondHalf);
    return finalList;
}
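For instance (my example, not part of the original answer; it assumes java.util.Arrays is imported), calling the method above:

List<Integer> result = sortHalf(Arrays.asList(5, 1, 9, 3, 7), 5);
// result is [1, 3, 9, 7]: values below 5 first, values above 5 after,
// and the 5 itself is dropped, per the note in the question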
Part 2:
The above algorithm has a time complexity of O(n), where n is the number of elements in listToSort. Note that the loop visits every element exactly once regardless of the values, so the bound is tight; a one-element list finishing in constant time is a statement about small input, not a separate best case.
So I'm trying to find out what Big-O is for a Rabin-Miller test and I've done some research on it but I can't really find a good explanation for it. The thing that confuses me the most is this part:
while (!isPrime(n)) {
    n = new BigInteger(bit, new Random());
}
This is a piece of my main program where I keep generating a new number until I find a prime and then it exits the loop. How can I estimate Big-O when I don't know how many times the while loop will run?
Well, when calculating the Big-O there are three things to do:
find the best case,
find the worst case, and
find the average case.
In this instance, the worst case Big-O would clearly be O(infinity), i.e. the loop never terminates. This is reached in the highly unlikely event that n is initially not a prime number and none of the newly generated values of n is ever prime either.
The best case Big-O would be the same as the Big-O of your isPrime() method. This is because the best case is when n is initially a prime number, so the body of the while loop is never executed at all. One thing to note is that your while-loop condition does two things: it checks a boolean value and it calls the isPrime() method. So to find the Big-O, one must multiply the Big-O of the isPrime() method by the Big-O of the boolean check, which is O(1). Therefore your best case is the same as the Big-O of your isPrime() method, as 1*x = x. I do not know how you wrote your isPrime() method, so I cannot tell you the best-case Big-O.
The average case, however, is harder to find here because you're dealing with random numbers. Since we are dealing with random numbers, the average case can be calculated using something called expected analysis. To do this, however, we need to know the range of the random numbers. The Java API says that the BigInteger constructor you're using generates random numbers in the range 0 to 2^bit - 1, inclusive (non-negative numbers only). Since I don't know the value of bit in your code, I cannot give you the average case, but I hope you'll be able to calculate that yourself.
If you have any questions just ask!
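For a concrete feel (my sketch, not part of the original answer; it stands in BigInteger.isProbablePrime for your isPrime()), the prime number theorem says the density of primes below 2^bit is about 1/(bit * ln 2), so the loop should draw roughly bit * ln 2 numbers on average:

import java.math.BigInteger;
import java.util.Random;

public class PrimeLoopDemo {
    public static void main(String[] args) {
        int bit = 64;                          // hypothetical bit length
        Random rnd = new Random();
        int trials = 1_000;
        long totalDraws = 0;
        for (int t = 0; t < trials; t++) {
            BigInteger n = new BigInteger(bit, rnd);
            long draws = 1;
            while (!n.isProbablePrime(40)) {   // stand-in for the asker's isPrime()
                n = new BigInteger(bit, rnd);
                draws++;
            }
            totalDraws += draws;
        }
        // Expect roughly 64 * ln 2, i.e. about 44 draws per prime found.
        System.out.println("average draws: " + (double) totalDraws / trials);
    }
}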
public static void Comp(int n) {
    int count = 0;
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) {
            for (int k = 1; k < n; k *= 2) {
                count++;
            }
        }
    }
    System.out.println(count);
}
Does anyone know what the time complexity is, and what the Big O is?
Can you please explain this to me, step by step?
Whoever gave you this problem is almost certainly looking for the answer n^2 log(n), for reasons explained by others.
However, the question doesn't really make sense for Java ints. If n > 2^30, k will overflow: it reaches 2^30, doubles to -2^31, then sticks at 0 after the next doubling, and since 0 < n the inner loop never terminates.
Even if we treat this problem as completely theoretical, and assume n, k and count aren't Java ints but some theoretical integer type, the answer n^2 log n assumes that the operations ++ and *= take constant time no matter how many bits are needed to represent the integers. This assumption isn't really valid.
Update
It has been pointed out to me in the comments below that, based on the way the hardware works, it is reasonable to assume that ++, *=2 and < all have constant time complexity, no matter how many bits are required. This invalidates the third paragraph of my answer.
In theory this is O(n^2 * log(n)).
Each of the two outer loops is O(n) and the inner one is O(log(n)), because log base 2 of n is the number of times you have to divide n by 2 to get down to 1.
Also, this is a tight bound, i.e. the code is also Θ(n^2 * log(n)).
The time complexity is O(n^2 log n). Why? Each for-loop is a function of n, and you multiply by n for each loop, except for the inner loop, which grows as log n because k is multiplied by 2 on each iteration. Think of merge sort or binary search trees.
details
for the first two loops: each runs from i = 0 to n - 1, i.e. n iterations, so together they give n * n = n^2 = O(n^2)
for the k loop, k grows as 1, 2, 4, 8, 16, 32, ..., so after t iterations k = 2^t, and the loop stops when 2^t reaches n. Taking the log of both sides gives t = log n
Still not clear? The iterations of the doubling loop form a geometric series: sum from k = m to n - 1 of a^k = (a^m - a^n) / (1 - a). If we set m = 0 and a = 2, we get (1 - 2^n) / (1 - 2) = 2^n - 1. Why is a = 2? Because that is the value of a for which the series yields 2, 4, 8, 16, ..., 2^k.
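To sanity-check the n^2 log n result (my sketch, not part of the original answers), compare the printed count against n^2 * ceil(log2(n)) for a few small n:

public class CompCheck {
    static long comp(int n) {
        long count = 0;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                for (int k = 1; k < n; k *= 2)
                    count++;
        return count;
    }

    public static void main(String[] args) {
        for (int n : new int[]{4, 8, 16, 32}) {
            // ceil(log2(n)) computed via the bit length of n - 1
            long predicted = (long) n * n * (32 - Integer.numberOfLeadingZeros(n - 1));
            System.out.println("n=" + n + " count=" + comp(n) + " predicted=" + predicted);
        }
    }
}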
I have a question about calculating time complexity: is it calculated based on the number of times a loop executes? My question stems from the situation below.
I have a class A, which has a String attribute.
class A {
    String name;
}
Now, I have a list of class A instances. This list has different names in it. I need to check whether the name "Pavan" exists in any of the objects in the list.
Scenario 1:
Here the for loop executes listA.size() times, which can be described as O(n).
public boolean checkName(List<A> listA, String inputName) {
    for (A a : listA) {
        if (a.name.equals(inputName)) {
            return true;
        }
    }
    return false;
}
Scenario 2:
Here the for loop executes at most listA.size()/2 + 1 times.
public boolean checkName(List<A> listA, String inputName) {
    int length = listA.size() / 2;
    // an odd-sized list needs one extra iteration to cover the middle element
    length = listA.size() % 2 == 0 ? length : length + 1;
    for (int i = 0; i < length; i++) {
        if (listA.get(i).name.equals(inputName)
                || listA.get(listA.size() - i - 1).name.equals(inputName)) {
            return true;
        }
    }
    return false;
}
I minimized the number of times the for loop executes, but I increased the complexity of the logic.
Can we say this is O(n/2)? If so, can you please explain it to me?
First note that in Big-O notation there is no such thing as O(n/2), as 1/2 is a constant factor which is ignored in this notation. The complexity remains O(n), so by modifying your code you haven't changed anything regarding complexity.
In general, estimating the number of times a loop is executed with respect to the input size, together with the cost of the operation inside it, is the way to get to the complexity class of the algorithm.
The operation that produces cost in your method is String.equals, which, looking at its implementation, produces cost by comparing characters.
In your example the input size is not strictly equal to the size of the list. It also depends on how large the strings contained in that list are and how large the inputName is.
So let's say the largest string in the list is m1 characters long and inputName is m2 characters long. Then for your original checkName method the complexity would be O(n*min(m1,m2)), because String.equals compares at most that many characters of a string.
For most applications the min(m1,m2) term doesn't matter, as one of the compared strings is, for example, stored in a fixed-size database column; the term is then a constant, which is, as said above, ignored.
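To illustrate (my example, not part of the original answer; String.repeat needs Java 11+): String.equals first compares lengths, then characters left to right, stopping at the first mismatch.

String a = "a".repeat(1_000_000) + "x";
String b = "a".repeat(1_000_000) + "y";
boolean slow = a.equals(b);    // compares ~1,000,000 characters before failing
boolean fast = a.equals("b");  // length check fails immediately, effectively O(1)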
No. In Big-O expressions, all constant factors are ignored.
We only care about the growth in n, as in O(n^2) or O(log n).
Time and space complexity are calculated based on the number of operations executed and the number of units of memory used, respectively.
Regarding time complexity: all the operations are counted. Because it's hard to compare, say, O(2*n^2+5*n+3) with O(3*n^2-3*n+1), equivalence classes are used. For very large values of n the two expressions have a similar rate of growth: their ratio approaches a constant. Therefore, you reduce each expression to its most basic form and say that both examples are in the same equivalence class as O(n^2). Similarly, O(n) and O(n/2) are in the same class, and therefore both are in O(n).
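A quick numeric illustration of that equivalence (my example, not part of the original answer):

// The n^2 terms dominate: the ratio approaches the constant 2/3,
// so both expressions are in the same class, O(n^2).
for (int n : new int[]{10, 1_000, 100_000}) {
    double f = 2.0 * n * n + 5 * n + 3;
    double g = 3.0 * n * n - 3 * n + 1;
    System.out.printf("n=%d f/g=%.4f%n", n, f / g);
}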
Because of this, you can ignore most constant-time operations (such as .size() or .length() on collections, assignments, etc.) as they don't really count in the end. You're therefore left with loop operations and sometimes complex computations (that somewhere lower on the stack use loops themselves).
To get a better understanding of the three classes of complexity, try reading articles on the subject, such as: http://discrete.gr/complexity/
Time complexity is a measure of the theoretical time it will take for an operation to be executed.
While normally any improvement in running time is significant, in time complexity we are interested only in the order of magnitude. Concretely:
If an operation for N objects requires N time intervals then it has complexity O(N).
If an operation for N objects requires N/2, its complexity is still O(N), though.
The above paradox is explained by looking at large N: the /2 part makes no real difference compared to the N part. If the complexity were O(N^2), then an O(N) term would be negligible for large N; that's why we are interested in the order of magnitude.
In other words any constant is thrown away when calculating complexity.
As for the question "Is it calculated based on the number of times a loop executes?": well, it depends on what the loop contains. If only basic operations are executed inside the loop, then yes. For example, if an eigenanalysis, which has complexity O(N^3), is executed on each run of the loop, you cannot say that your complexity is simply O(N).
Complexity of an algorithm is measured by how its processing time or space requirement responds to the input size. I think you are missing the fact that the notations used to express complexity are asymptotic notations.
As per your question, you have reduced the loop execution count, but not the linear relation to the input size.
This question already has answers here: Computational complexity of Fibonacci Sequence
So, I've got a recursive method in Java for getting the nth Fibonacci number. The only question I have is: what's the time complexity? I think it's O(2^n), but I may be mistaken. (I know that iterative is way better, but it's an exercise.)
public int fibonacciRecursive(int n) {
    if (n == 1 || n == 2) return 1;
    else return fibonacciRecursive(n - 2) + fibonacciRecursive(n - 1);
}
Your recursive code has exponential runtime, but I don't think the base is 2; it's probably the golden ratio (about 1.618). Of course, O(1.618^n) is automatically O(2^n) too.
The runtime can be calculated recursively:
t(1) = 1
t(2) = 1
t(n) = t(n-1) + t(n-2) + 1
This is very similar to the recursive definition of the Fibonacci numbers themselves. The +1 in the recursive equation is probably irrelevant for large n, so I believe the runtime grows approximately as fast as the Fibonacci numbers, and those grow exponentially with the golden ratio as base.
You can speed it up using memoization, i.e. caching already calculated results. Then it has O(n) runtime just like the iterative version.
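A minimal memoized sketch (my code, not the answerer's) that brings the runtime down to O(n):

import java.util.HashMap;
import java.util.Map;

public class Fib {
    private static final Map<Integer, Long> cache = new HashMap<>();

    static long fib(int n) {
        if (n == 1 || n == 2) return 1;
        Long cached = cache.get(n);
        if (cached != null) return cached;    // each value is computed only once
        long value = fib(n - 1) + fib(n - 2);
        cache.put(n, value);
        return value;
    }
}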
Your iterative code has a runtime of O(n)
You have a simple loop with O(n) steps and constant time for each iteration.
You can use the matrix identity
[[1, 1], [1, 0]]^n = [[F(n+1), F(n)], [F(n), F(n-1)]]
together with exponentiation by squaring to calculate F(n) in O(log n).
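For instance (my sketch, not the original answer's code), the equivalent fast-doubling identities F(2k) = F(k) * (2*F(k+1) - F(k)) and F(2k+1) = F(k)^2 + F(k+1)^2 give an O(log n) method:

// Returns {F(n), F(n+1)} using O(log n) arithmetic operations.
static long[] fib(int n) {
    if (n == 0) return new long[]{0, 1};
    long[] half = fib(n / 2);
    long a = half[0], b = half[1];     // a = F(k), b = F(k+1), where k = n / 2
    long c = a * (2 * b - a);          // F(2k)
    long d = a * a + b * b;            // F(2k+1)
    return (n % 2 == 0) ? new long[]{c, d} : new long[]{d, c + d};
}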
Each function call does exactly one addition, or returns 1. The base cases only return the value one, so the total number of additions is fib(n)-1. The total number of function calls is therefore 2*fib(n)-1, so the time complexity is Θ(fib(N)) = Θ(phi^N), which is bounded by O(2^N).
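To check the 2*fib(n) - 1 count empirically (my snippet, not part of the original answer):

static int calls = 0;

static int fib(int n) {
    calls++;                           // count every invocation
    if (n == 1 || n == 2) return 1;
    return fib(n - 2) + fib(n - 1);
}

// After fib(10), which returns 55, calls == 2 * 55 - 1 == 109.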
O(2^n)? I see only O(n) here.
I wonder why you'd continue to calculate and re-calculate these. Wouldn't caching the ones you have be a good idea, as long as the memory requirements didn't become too onerous?
Since they aren't changing, I'd generate a table and do lookups if speed mattered to me.
It's easy to see (and to prove by induction) that the number of calls to fibonacciRecursive that hit a base case is exactly equal to the final value returned. That is indeed exponential in the input number.