How to calculate Big O for this algorithm? - java

I want to find the Big O of method1.
public static void method1(int[] array, int n)
{
    for (int index = 0; index < n - 1; index++)
    {
        int mark = privateMethod1(array, index, n - 1);
        int temp = array[index];
        array[index] = array[mark];
        array[mark] = temp;
    } // end for
} // end method1
public static int privateMethod1(int[] array, int first, int last)
{
    int min = array[first];
    int indexOfMin = first;
    for (int index = first + 1; index <= last; index++)
    {
        if (array[index] < min)
        {
            min = array[index];
            indexOfMin = index;
        } // end if
    } // end for
    return indexOfMin;
} // end privateMethod1
My thinking is that we need not care about privateMethod1; is this true? When calculating Big O, can we ignore function calls and consider only other factors, such as assignment operations, in method1 itself?
Thanks.

Only operations that run in constant time, O(1), can be treated as basic operations in the analysis of your algorithm's running time; in this specific case, finding an asymptotic upper bound for your algorithm (Big-O notation). The number of iterations in the for loop of your method privateMethod1 depends on index in method1 (which itself depends on n) as well as on n, so that loop clearly does not run in constant time.
Hence, we need to include privateMethod1 in our Big-O analysis of your algorithm. We'll treat all other operations, such as assignments and if statements, as basic operations.
Treated as basic operations in our analysis:
/* in 'method1' */
int temp = array[index];
array[index] = array[mark];
array[mark] = temp;

/* in 'privateMethod1' */
int min = array[first];
int indexOfMin = first;
// ...
if (array[index] < min)
{
    min = array[index];
    indexOfMin = index;
}
With this cleared up, you can analyse the algorithm using sigma notation: the outer sum describes the for loop in method1, the inner sum describes the for loop in privateMethod1, and the 1 generalizes "the cost" of all basic operations in the inner for loop.
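As a sketch of that analysis (loop bounds read off the code above), the total number of times the basic operations in the inner for loop execute is

\sum_{i=0}^{n-2} \sum_{j=i+1}^{n-1} 1 = \sum_{i=0}^{n-2} (n-1-i) = \sum_{k=1}^{n-1} k = \frac{n(n-1)}{2} \in O(n^2).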
Hence, an upper asymptotic bound for your algorithm method1 is O(n^2).

My thinking is that we need not care about privateMethod1, is this true?
No, you are wrong. You need to account for other function calls while calculating complexity. privateMethod1 runs in O(n) time, since in the worst case first will be 0 and last is always n - 1. So the overall loop, i.e. method1, runs in O(n^2) time.
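To convince yourself empirically, you can count the inner-loop iterations. Below is a hypothetical instrumented copy of the two methods (the comparisons counter and the main method are my additions for illustration); for n = 1000 it prints 499500, which is n(n-1)/2 and grows quadratically:

public class CountOps {
    static long comparisons = 0; // added for illustration

    public static void method1(int[] array, int n) {
        for (int index = 0; index < n - 1; index++) {
            int mark = privateMethod1(array, index, n - 1);
            int temp = array[index];
            array[index] = array[mark];
            array[mark] = temp;
        }
    }

    public static int privateMethod1(int[] array, int first, int last) {
        int min = array[first];
        int indexOfMin = first;
        for (int index = first + 1; index <= last; index++) {
            comparisons++; // one comparison per inner-loop iteration
            if (array[index] < min) {
                min = array[index];
                indexOfMin = index;
            }
        }
        return indexOfMin;
    }

    public static void main(String[] args) {
        int n = 1000;
        method1(new int[n], n);
        System.out.println(comparisons); // prints 499500 = 1000 * 999 / 2
    }
}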

Related

Java basic time complexity question I can't fully understand

I am confused about the time complexity of the following method and could use some help. I am not sure whether it counts as O(1) or O(n) when the for loop receives both the starting point and the end point as integer parameters. Thanks in advance.
private static int f(int[] a, int low, int high)
{
    int res = 0;
    for (int i = low; i <= high; i++)
        res += a[i];
    return res;
}
In your case, the input size is n = high - low + 1, which is exactly the number of iterations your algorithm executes. Hence the time complexity is O(n).
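As a concrete (hypothetical) usage, assuming f from the question is in scope, the bound depends on how the endpoints relate to the array size:

int[] a = new int[1000];
int total = f(a, 0, a.length - 1); // high - low + 1 = 1000 iterations: O(n)
int window = f(a, 10, 19);         // always 10 iterations: O(1) in the array size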

Not understanding big O notation O(\sum_{i=0}^{n-1} \sum_{j=i+1}^{n} (j-i)) = O(\sum_{i=0}^{n-1} \frac{(1+n-i)(n-i)}{2}) = O(n^3)

Working on the following problem:
Given a string s, find the length of the longest substring without repeating characters.
I'm using this brute force solution:
public class Solution {
    public int lengthOfLongestSubstring(String s) {
        int n = s.length();
        int res = 0;
        for (int i = 0; i < n; i++) {
            for (int j = i; j < n; j++) {
                if (checkRepetition(s, i, j)) {
                    res = Math.max(res, j - i + 1);
                }
            }
        }
        return res;
    }

    private boolean checkRepetition(String s, int start, int end) {
        int[] chars = new int[128];
        for (int i = start; i <= end; i++) {
            char c = s.charAt(i);
            chars[c]++;
            if (chars[c] > 1) {
                return false;
            }
        }
        return true;
    }
}
The big O notation is as follows:
O\left(\sum_{i=0}^{n-1} \sum_{j=i+1}^{n} (j-i)\right) = O\left(\sum_{i=0}^{n-1} \frac{(1+n-i)(n-i)}{2}\right) = O(n^3)
I understand that three nested iterations would result in a time complexity of O(n^3).
I only see two sigma operators at the start of the formula; could someone enlighten me on where the third iteration comes into play?
The first sum from i=0 to n-1 corresponds to the outer for loop of lengthOfLongestSubstring, which you can see iterates from i=0 to n-1.
The second sum from j = i+1 to n corresponds to the second for loop (the formula starts j at i+1 rather than i because there is no need to check length-0 substrings).
Generally, we would expect this particular double for loop structure to produce O(n^2) algorithms and a third for loop (from k=j+1 to n) to lead to O(n^3) ones. However, this general rule (k for loops iterating through all k-tuples of indices producing O(n^k) algorithms) is only the case when the work done inside the innermost for loop is constant. This is because having k for loops structured in this way produces O(n^k) total iterations, but you need to multiply the total number of iterations by the work done in each iteration to get the overall complexity.
From this idea, we can see that the reason lengthOfLongestSubstring is O(n^3) is that the work done inside the body of the second for loop is not constant, but rather O(n). checkRepetition(s, i, j) iterates from i to j, taking j-i time (hence the expression inside the second term of the sum). O(j-i) time is O(n) time in the worst case because i could be as low as 0 and j as high as n, and of course O(n-0) = O(n) (it's not too hard to show that checkRepetition is O(n) in the average case as well).
As mentioned by a commenter, having a linear operation inside the body of your second for loop has the same practical effect, in terms of complexity, as having a third for loop, which would probably be easier to see as being O(n^3) (you could even imagine the body of checkRepetition, including its for loop, being pasted into lengthOfLongestSubstring in place to see the same result). But the basic idea is that doing O(n) work for each of the O(n^2) iterations of the two for loops means the total complexity is O(n)*O(n^2) = O(n^3).
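A sketch of how the sum collapses (substituting k = j - i in the inner sum):

\sum_{i=0}^{n-1} \sum_{j=i+1}^{n} (j-i) = \sum_{i=0}^{n-1} \sum_{k=1}^{n-i} k = \sum_{i=0}^{n-1} \frac{(n-i)(n-i+1)}{2} = O\left(\sum_{i=0}^{n-1} (n-i)^2\right) = O(n^3).

Each of the three factors of n comes from one source: the outer loop, the inner loop, and the linear scan inside checkRepetition.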

Finding big O recursion

I'm trying to figure out the Big O and Big Omega of the piece of code below.
This code takes an array of ints and sorts them in ascending order.
The worst case would be all in descending order, {5,4,3,2,1},
and the best case would be ascending order, {1,2,3,4,5}.
static int counter = 0;
static int counter1 = 0;
static int counter2 = 0;

public static int[] MyAlgorithm(int[] a) {
    int n = a.length;
    boolean done = true;
    int j = 0;
    while (j <= n - 2) {
        counter++;
        if (a[j] > a[j + 1]) {
            int temp = a[j];
            a[j] = a[j + 1];
            a[j + 1] = temp;
            done = false;
        }
        j = j + 1;
    }
    j = n - 1;
    while (j >= 1) {
        counter1++;
        if (a[j] < a[j - 1]) {
            int temp = a[j - 1];
            a[j - 1] = a[j];
            a[j] = temp;
            done = false;
        }
        j = j - 1;
    }
    if (!done) {
        counter2++;
        MyAlgorithm(a);
    }
    return a;
}
The worst case I got for each while loop was n-1 iterations, and for the recursion it was n/2 calls.
The best case is n-1 iterations per while loop and zero recursion.
So my Big Omega is Omega(n) (no recursion).
But for Big O, here is the confusing part for me: since there are n/2 recursive calls, does this mean I do n * n work (because of the n/2 recursion), making it O(n^2)? Or does it stay O(n)?
As you said, the Omega is Omega(n). In case all numbers in the array a are already in sorted order, the code iterates over the array twice, once per while loop. That is O(n) steps, each taking O(1) time.
In the worst case you are correct in assuming O(n^2). As you saw, an array sorted in reverse order produces such a worst-case scenario: each run of MyAlgorithm carries the largest remaining number to the end (first while loop) and the smallest remaining number to the front (second while loop), so about n/2 runs are needed before all numbers reach their final positions. Each run costs O(n), hence O(n/2 * n) = O(n^2).
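A minimal sketch to check this empirically, assuming it lives in the same class as MyAlgorithm and the static counters above (this main method is my addition, not from the original post):

public static void main(String[] args) {
    int n = 1000;
    int[] a = new int[n];
    for (int i = 0; i < n; i++) {
        a[i] = n - i; // reverse sorted: n, n-1, ..., 1
    }
    MyAlgorithm(a);
    // counter2 counts the recursive calls: about n/2 of them,
    // and each call performs 2(n-1) comparisons in its two while loops,
    // so the total work is on the order of n^2
    System.out.println("recursive calls: " + counter2);
    System.out.println("comparisons: " + (counter + counter1));
}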
Small side note: comparison-based sorting is in general Omega(n log n), so you can only sort in O(n) under special circumstances.

Time complexity(Java, Quicksort)

I have a very general question about calculating time complexity (Big O notation). When people say that the worst-case time complexity of QuickSort is O(n^2) (picking the first element of the array as the pivot every time, on an inversely sorted array), which operations do they account for to get O(n^2)? Do they count the comparisons made by the if/else statements, or only the total number of swaps? Generally, how do you know which "steps" to count when calculating Big O notation?
I know this is a very basic question, but I've read almost all the articles on Google and still haven't figured it out.
Worst cases of Quick Sort
The worst case of Quick Sort occurs when the array is inversely sorted, already sorted, or all elements are equal.
Understand Big-Oh
Having said that, let us first understand what Big-Oh of something means.
When we have only an asymptotic upper bound, we use O-notation. For a given function g(n), we denote by O(g(n)) the set of functions
O(g(n)) = { f(n) : there exist positive constants c and n₀
            such that 0 <= f(n) <= c*g(n) for all n >= n₀ }
How do we calculate Big-Oh?
Big-Oh basically describes how a program's complexity grows with the input size.
Here is the code:
import java.util.*;

class QuickSort
{
    static int partition(int A[], int p, int r)
    {
        int x = A[r];
        int i = p - 1;
        for (int j = p; j <= r - 1; j++)
        {
            if (A[j] <= x)
            {
                i++;
                int t = A[i];
                A[i] = A[j];
                A[j] = t;
            }
        }
        int temp = A[i + 1];
        A[i + 1] = A[r];
        A[r] = temp;
        return i + 1;
    }

    static void quickSort(int A[], int p, int r)
    {
        if (p < r)
        {
            int q = partition(A, p, r);
            quickSort(A, p, q - 1);
            quickSort(A, q + 1, r);
        }
    }

    public static void main(String[] args) {
        int A[] = {5, 9, 2, 7, 6, 3, 8, 4, 1, 0};
        quickSort(A, 0, 9);
        Arrays.stream(A).forEach(System.out::println);
    }
}
Take into consideration the following statements:
Block 1:
int x = A[r];
int i = p - 1;
Block 2:
if (A[j] <= x)
{
    i++;
    int t = A[i];
    A[i] = A[j];
    A[j] = t;
}
Block 3:
int temp = A[i + 1];
A[i + 1] = A[r];
A[r] = temp;
return i + 1;
Block 4:
if (p < r)
{
    int q = partition(A, p, r);
    quickSort(A, p, q - 1);
    quickSort(A, q + 1, r);
}
Assume each statement takes a constant time c. Let's calculate how much time each block takes.
The first block takes 2c time.
The second block takes 5c time.
The third block takes 4c time.
We write this as O(1), which means the cost stays the same even when the size of the input varies; 2c, 5c and 4c are all O(1).
But when we add the loop around the second block,
for (int j = p; j <= r - 1; j++)
{
    if (A[j] <= x)
    {
        i++;
        int t = A[i];
        A[i] = A[j];
        A[j] = t;
    }
}
it runs r - p times (taking r - p, the size of the subarray, as n), i.e., n * O(1) = O(n). So partition does work proportional to the size of the subarray it is given.
The last block, the quickSort method, recurses on the two sides of the pivot. When the pivot always lands at one end (for example, on an already sorted array), the recursion goes n levels deep; when the pivot splits the array evenly, it goes about log n levels deep. Each level does O(n) total work in partition, hence the entire complexity is either O(n^2) or O(n log n).
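The same conclusion follows from the standard recurrences (a sketch, where T(n) is the time to sort n elements and the O(n) term is the cost of partition):

T_{\text{worst}}(n) = T(n-1) + O(n) = O(n^2) \qquad \text{(pivot always lands at one end)}
T_{\text{balanced}}(n) = 2\,T(n/2) + O(n) = O(n \log n) \qquad \text{(pivot splits the array evenly)}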
The count is based mainly on the size n that can grow; for quicksorting an array it is the size of the array. How many times do you need to access each element of the array? If you only need to access each element once, then it is O(n), and so on.
Temp variables / local work that grows as n grows is counted.
Anything that does not grow significantly when n grows can be counted as a constant: O(n) + c = O(n).
Just to add to what others have said: I agree with those who said you count everything, but if I recall correctly from my algorithm classes in college, the swapping overhead is usually minimal compared with the comparison time, and in some cases is 0 (if the list in question is already sorted).
For example, the formula for a linear search is
T = K * N / 2
where T is the total time, K is some constant defining the total computation time, and N is the number of elements in the list.
On average, the number of comparisons is N/2.
But we can rewrite this as
T = (K/2) * N
or, redefining K,
T = K * N.
This makes it clear that the time is directly proportional to the size of N, which is what we really care about. As N increases significantly, it becomes the only thing that really matters.
A binary search, on the other hand, grows logarithmically: O(log N).
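For a concrete sense of the gap (simple arithmetic, my addition): with N = 1{,}000{,}000 elements,

T_{\text{linear}} \approx N/2 = 500{,}000 comparisons, while T_{\text{binary}} \approx \log_2 N \approx 20 comparisons.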

Finding smallest element in an integer array in java using divide and conquor algorithm

I tried to find the smallest element in an integer array using what I understood about the divide and conquer approach.
I am getting correct results, but I am not sure if this is a conventional way of using divide and conquer.
If there is a smarter way of implementing divide and conquer than what I have tried, please let me know.
public static int smallest(int[] array) {
    int i = 0;
    int array1[] = new int[array.length / 2];
    int array2[] = new int[array.length - (array.length / 2)];
    for (int index = 0; index < array.length / 2; index++) {
        array1[index] = array[index];
    }
    for (int index = array.length / 2; index < array.length; index++) {
        array2[i] = array[index];
        i++;
    }
    if (array.length > 1) {
        if (smallest(array1) < smallest(array2)) {
            return smallest(array1);
        } else {
            return smallest(array2);
        }
    }
    return array[0];
}
Your code is correct, but you can write less code using existing functions like Arrays.copyOfRange and Math.min:
// requires: import java.util.Arrays;
public static int smallest(int[] array) {
    if (array.length == 1) {
        return array[0];
    }
    int array1[] = Arrays.copyOfRange(array, 0, array.length / 2);
    int array2[] = Arrays.copyOfRange(array, array.length / 2, array.length);
    return Math.min(smallest(array1), smallest(array2));
}
Another point: testing for array.length == 1 at the beginning is the more readable version. Functionally it is identical, but from a performance point of view it creates fewer arrays, exiting from smallest as soon as possible.
It is also possible to use a different form of recursion where it is not necessary to create new arrays.
private static int smallest(int[] array, int from, int to) {
    if (from == to) {
        return array[from];
    }
    int middle = from + (to - from) / 2;
    return Math.min(smallest(array, from, middle), smallest(array, middle + 1, to));
}

public static int smallest(int[] array) {
    return smallest(array, 0, array.length - 1);
}
This second version is more efficient because it doesn't create new arrays.
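A sketch of why, using the usual divide-and-conquer recurrences (my reasoning, not from the original answer): the copying version pays O(n) per call for the Arrays.copyOfRange calls,

T_{\text{copy}}(n) = 2\,T(n/2) + O(n) = O(n \log n),

while the index-based version does only constant work per call,

T_{\text{index}}(n) = 2\,T(n/2) + O(1) = O(n).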
I don't find any use in applying divide and conquer in this particular program.
Anyhow, you search the whole array from 1 to N, just in two steps:
1. 1 to N / 2
2. N / 2 + 1 to N
This is equivalent to 1 to N.
Also, your program performs a few additional checks after the loops which aren't required when you do it directly:
int min = a[0];
for (int i = 1; i < a.length; i++)
    if (a[i] < min)
        min = a[i];
This is generally the most efficient way of finding the minimum value.
When do I use divide and conquer
A divide and conquer algorithm works by recursively breaking down a problem into two or more sub-problems, until these become simple enough to be solved directly.
Consider the Merge Sort Algorithm.
Here, we divide the problem step by step until the pieces are small enough to sort, and then we combine them. In this case divide and conquer is optimal: a naive sort runs in O(n * n), while merge sort runs in O(n log n).
But for finding the minimum, the straightforward scan is already O(n), so the simple approach is good.
Divide And Conquer
The book Data Structures and Algorithm Analysis in Java, 2nd edition, by Mark Allen Weiss, says that a D&C algorithm should have two disjoint recursive calls, i.e. like QuickSort. The above algorithm does not have this, even if it can be implemented recursively.
What you did here is correct, but there are more efficient ways of solving this, of which I'm sure you're aware.
Although a divide and conquer algorithm can be applied to this problem, it is better suited to complex problems, or to understanding a difficult problem by breaking it into smaller fragments. One prime example would be the Tower of Hanoi.
As far as your code is concerned, it is correct. Here's another copy of the same code:
public class SmallestInteger {
    public static void main(String[] args) {
        int small;
        int array[] = {4, -2, 8, 3, 56, 34, 67, 84};
        small = smallest(array);
        System.out.println("The smallest integer is = " + small);
    }

    public static int smallest(int[] array) {
        int array1[] = new int[array.length / 2];
        int array2[] = new int[array.length - (array.length / 2)];
        for (int index = 0; index < array.length / 2; index++) {
            array1[index] = array[index];
        }
        for (int index = array.length / 2; index < array.length; index++) {
            array2[index - array.length / 2] = array[index];
        }
        if (array.length > 1) {
            if (smallest(array1) < smallest(array2)) {
                return smallest(array1);
            } else {
                return smallest(array2);
            }
        }
        return array[0];
    }
}
The result came out to be:
The smallest integer is = -2
