Time complexity for the given code (Java)

The code below finds how many times a number appears in an array.
For example, given:
1,2,2,2,3,4,5,5,5,5
number 2 = 3 times
number 5 = 4 times
What is the time complexity of the code below?
And what is the best way to solve this problem with respect to time complexity?
public static void main(String[] args)
{
    int[] data = {1,1,2,3,4,4,4,5,6,7,8,8,8,8};
    System.out.println(count(data, 8));
}

public static int count(int[] a, int x)
{
    int count = 0;
    int index = 0;
    while (index < a.length)
    {
        if (a[index] == x)
        {
            count++;
        }
        index++;
    }
    return count;
}

Is it O(n), O(log n), or something else?
You look at every element once, so it is O(n).
If it is O(n), can I do this task with O(log n) complexity?
Do a binary search with a low and a high value, searching for the position just below the value you are looking for and the position just above it. The difference between these two results tells you how many there are. This is an O(log n) search.
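A minimal sketch of that idea, assuming the array is sorted in ascending order (the helper names lowerBound and upperBound are my own):

// Counts occurrences of x in a sorted array with two binary searches: O(log n).
public static int count(int[] a, int x)
{
    return upperBound(a, x) - lowerBound(a, x);
}

// First index i with a[i] >= x (a.length if there is none).
private static int lowerBound(int[] a, int x)
{
    int lo = 0, hi = a.length;
    while (lo < hi)
    {
        int mid = (lo + hi) >>> 1;
        if (a[mid] < x) lo = mid + 1;
        else hi = mid;
    }
    return lo;
}

// First index i with a[i] > x (a.length if there is none).
private static int upperBound(int[] a, int x)
{
    int lo = 0, hi = a.length;
    while (lo < hi)
    {
        int mid = (lo + hi) >>> 1;
        if (a[mid] <= x) lo = mid + 1;
        else hi = mid;
    }
    return lo;
}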

while(index<a.length)
This loop runs once per element of data, so it is O(n). If you want to do this in O(log n), you will need a sorted array and a binary search. You already have a sorted array, so you would just need to do a binary search.

The answer is O(n).
You loop through the array looking once at every index.
e.g. array size is 10 --> 10 comparisons, array size is 100 --> 100 comparisons and so on.

You examine each element of the array, and therefore your code has O(n) time complexity.
To do it in O(log n) time, you have to exploit the fact that the array is sorted. This can be done with a variant of binary search. Since this looks like homework, I'll let you think about it for a bit before offering any more hints.

O(n). This code visits every element of the array once.
while(index<a.length)
You can replace this while loop with:
for (int index = 0; index < a.length; index++)

O(n). When code iterates through each and every element of an array, it is O(n).

Related

Get the Sum of all positive numbers in a Circularly ordered array in O(Log N)

I've been given an exercise in class that requires the following:
An array v formed by N integers is circularly ordered if either the array is ordered, or else v[N−1] ≤ v[0] and ∃k with 0<k<N such that ∀i≠k v[i] ≤ v[i+1].
Example:
Given a circularly ordered array with at most 10 positive items, calculate the sum of the positive values. For this last example the answer would be 27.
I've been required to implement it using a divide-and-conquer scheme in Java, with worst-case complexity O(log N), where N is the array size.
So far I have tried probing values until I find a positive one; then, since the other positive values are adjacent to it, the at most 10 positive values can be summed with O(1) complexity.
I thought of doing a binary search to achieve O(log N) complexity, but this would not follow the divide-and-conquer pattern.
I can easily implement it with O(N) complexity like this:
public static int addPositives(int[] vector){
    return addPositives(vector, 0, vector.length - 1);
}

public static int addPositives(int[] vector, int i0, int iN){
    int k = (i0 + iN) / 2;
    if (iN - i0 > 1){
        return addPositives(vector, i0, k) + addPositives(vector, k + 1, iN);
    } else {
        int temp = 0;
        for (int i = i0; i <= iN; i++) {
            if (vector[i] > 0) temp += vector[i];
        }
        return temp;
    }
}
However, trying to reach O(log N) gets me nowhere. How could I achieve it?
You can improve your divide-and-conquer implementation to meet the required running time if you prune irrelevant branches of the recursion.
After you divide the current array into two sub-arrays, compare the first and last elements of each sub-array. If both are negative and the first is smaller than the last, you know for sure that all the elements in this sub-array are negative, so you don't have to make the recursive call on it (it will contribute 0 to the total sum).
You can also stop the recursion if all the elements in a sub-array are positive, which can likewise be verified by comparing its first and last elements; in that case you have to sum all the elements of that sub-array, so there is no point in continuing the recursion.
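A minimal sketch of that pruning, assuming the input is circularly ordered and contains at most 10 positive items as the exercise states (the helper sumRange is my own):

// Divide and conquer with pruning of sorted all-negative / all-positive sub-arrays.
public static int addPositives(int[] v, int lo, int hi){
    if (lo == hi) return v[lo] > 0 ? v[lo] : 0;
    if (v[lo] < v[hi]){
        // The wrap point cannot lie strictly inside this sub-array, so it is sorted.
        if (v[hi] <= 0) return 0;                   // all non-positive: prune
        if (v[lo] > 0) return sumRange(v, lo, hi);  // all positive: at most 10 items, O(1)
    }
    int mid = (lo + hi) / 2;
    return addPositives(v, lo, mid) + addPositives(v, mid + 1, hi);
}

private static int sumRange(int[] v, int lo, int hi){
    int sum = 0;
    for (int i = lo; i <= hi; i++) sum += v[i];
    return sum;
}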
My advice for the O(log N) version would be a direct comparison to test the second of the two criteria, the last item being no greater than the first:
return vector[0] >= vector[iN - 1];
If you want something more elaborate (I forget the algorithm's name), you could start at the array's halfway point and run two ordered searches from there: one from the middle to the start, and one from the middle to the end.

Choosing random Pivot in QuickSort partitioning takes more time, how is this possible?

public static int partitionsimple_hoare(int[] arr, int l, int h){
    int pivot = arr[l];
    int i = l - 1;
    int j = h + 1;
    while (true){
        do {
            i++;
        } while (arr[i] < pivot);
        do {
            j--;
        } while (arr[j] > pivot);
        if (i < j){
            swap(arr, i, j);
        }
        else break;
    }
    return j;
}
This is the basic implementation of the quicksort partition that I used, but when I replace the first line (choosing the pivot) with:
int pivot = arr[randomPivot(l,h)];
where the implementation is this:
public static int randomPivot(int l, int r){
    int x = (int) (Math.random() * (r - l + 1));
    return x + l;
}
It surprisingly takes a lot more time (I measured the time around the sort call with System.nanoTime()).
This isn't supposed to happen. Is it because Java's Math.random() takes more time than it ideally should?
With randomized pivot selection the expected running time is O(n log n); the O(n^2) worst case is still possible, but it becomes extremely unlikely, so the randomized version should typically take less time, not more.
Whenever we choose the first or the last element as the pivot, quicksort's best-case time complexity is O(n log n), its worst case is O(n^2), and its average case is O(n log n).
When the pivot is chosen at random, which is done mainly to avoid that worst case, the time complexity is O(n log n) except in one case that still leads to the O(n^2) worst case: when all the elements in the input are identical. With many identical elements, a plain two-way partition does no useful splitting, so the sorting time grows; please check your input for this.
As for the code, compare your version against a reference implementation of quicksort with the first element as the pivot, in case you have made a mistake somewhere, and against a reference randomized quicksort; the C and Java code differ very little in this case. I would also suggest using System.nanoTime() to measure both variants on identical inputs.
I hope this helped. Thank you.
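One detail worth checking, as an aside: Hoare's scheme is usually written assuming the pivot value sits at arr[l]. If the random index happens to be h, the partition can return j == h and the surrounding quicksort may recurse forever. A common fix, sketched here under the assumption that you reuse the partition above, is to swap the randomly chosen element to the front first:

// Move a random element to position l, then run the unmodified Hoare partition.
public static int partitionRandomizedHoare(int[] arr, int l, int h){
    int r = l + (int) (Math.random() * (h - l + 1)); // random index in [l, h]
    swap(arr, l, r);
    return partitionsimple_hoare(arr, l, h);
}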

Finding mean and median in constant time

This is a common interview question.
You have a stream of numbers coming in (let's say more than a million). The numbers are in the range [0, 999].
Implement a class that supports three methods in O(1):
* insert(int i);
* getMean();
* getMedian();
This is my code.
public class FindAverage {
    private int[] store;
    private long size;
    private long total;
    private int highestIndex;
    private int lowestIndex;

    public FindAverage() {
        store = new int[1000];
        size = 0;
        total = 0;
        highestIndex = Integer.MIN_VALUE;
        lowestIndex = Integer.MAX_VALUE;
    }

    public void insert(int item) throws OutOfRangeException { // OutOfRangeException is a custom exception (definition omitted)
        if (item < 0 || item > 999) {
            throw new OutOfRangeException();
        }
        store[item]++;
        size++;
        total += item;
        highestIndex = Integer.max(highestIndex, item);
        lowestIndex = Integer.min(lowestIndex, item);
    }

    public float getMean() {
        return (float) total / size;
    }

    public float getMedian() {
        // TODO: this is the part I can't work out in O(1)
        throw new UnsupportedOperationException();
    }
}
I can't seem to think of a way to get the median in O(1) time.
Any help appreciated.
You have already done all the heavy lifting, by building the store counters. Together with the size value, it's easy enough.
You simply start iterating the store, summing up the counts until you reach half of size. That is your median value, if size is odd. For even size, you'll grab the two surrounding values and get their average.
Performance is O(1000/2) on average, which means O(1), since it doesn't depend on n, i.e. performance is unchanged even if n reaches into the billions.
Remember, O(1) doesn't mean instant, or even fast. As Wikipedia says it:
An algorithm is said to be constant time (also written as O(1) time) if the value of T(n) is bounded by a value that does not depend on the size of the input.
In your case, that bound is 1000.
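A minimal sketch of that scan, assuming the store and size fields from the question (and ignoring the empty-stream case):

// Walks the 1000 counters until the running count passes the middle: O(1000) = O(1).
public float getMedian() {
    long half = size / 2;
    long seen = 0;  // how many inserted values are smaller than 'value'
    int lower = -1; // for even size: the element at 0-based index size/2 - 1
    for (int value = 0; value < store.length; value++) {
        if (store[value] == 0) continue;
        if (size % 2 == 0 && lower == -1 && seen + store[value] >= half) {
            lower = value;
        }
        if (seen + store[value] >= half + 1) {
            // 'value' is the element at 0-based index size/2
            return size % 2 == 1 ? value : (lower + value) / 2.0f;
        }
        seen += store[value];
    }
    throw new IllegalStateException("no values inserted yet");
}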
The possible values that you can read are quite limited: just 1000. So you can implement something like a counting sort: each time a number comes in, you increase the counter for that value.
To maintain the median in constant time per insert, you need two numbers: the median index (i.e. the value of the median) and the number of values you've read that are to the left (or right) of the median. I will stop here, hoping you can figure out how to continue on your own.
EDIT (as pointed out in the comments): you already have the array of sorted counters (store) and you know the number of elements to the left of the median (size/2); you only need to glue the logic together. I would also point out that if you use linear additional memory, you won't need to iterate over the whole array on each insert.
For the general case, where the range of elements is unlimited, no such data structure can exist under the comparison model, because it would allow O(n) sorting.
Proof: assume such a data structure exists, and call it D.
Let A be an input array to sort. (Assume A.size() is even for simplicity; that can be relaxed easily by adding a garbage element and discarding it later.)
sort(A):
    ds = new D()
    for each x in A:
        ds.add(x)
    m1 = min(A) - 1
    m2 = max(A) + 1
    for (i = 0; i < A.size(); i++):
        ds.add(m1)
    # at this point, ds.median() is the smallest element in A
    for (i = 0; i < A.size(); i++):
        yield ds.median()
        # each two insertions advance the median by one element
        ds.add(m2)
        ds.add(m2)
Claim 1: This algorithm runs in O(n).
Proof: add() and median() are O(1) operations by assumption, and the number of iterations is linear, so the total complexity is linear.
Claim 2: The output is sorted(A).
Proof (guidelines): after inserting m1 n times, the median is the smallest element of A. Each pair of m2 insertions after that advances the median by one element, so the values are yielded in sorted order.
Since the above algorithm would sort in O(n), which is impossible under the comparison model, such a data structure does not exist.
QED.

Complexity of sorting algorithm

Here is a sorting algorithm, not a clever one. In this version it works only when the elements are non-negative and occur at most once. I'm confused about its time complexity: is it O(n)? If so, is it better than quicksort in terms of that notation? Thanks. Here is the code:
public int[] stupidSort(int[] array){
    // Variables
    int max = array[0];
    int index = 0;
    int[] lastArray = new int[array.length];
    // Find max element in input array
    for (int i = 0; i < array.length; i++){
        if (array[i] > max)
            max = array[i];
    }
    // Create a new array. In this array, element n will represent the number of n's in the input array
    int[] newArray = new int[max + 1];
    for (int j = 0; j < array.length; j++)
        newArray[array[j]]++;
    // If an element is bigger than 0, that number occurred in the input, so put it in the output array
    for (int k = 0; k < newArray.length; k++){
        if (newArray[k] > 0)
            lastArray[index++] = k;
    }
    return lastArray;
}
What you wrote is counting sort, and indeed it has O(n) complexity (but see the edit below). However, it cannot be directly compared to quicksort, because quicksort is a comparison-based algorithm. The two algorithms belong to different categories (yours is non-comparison, quicksort is comparison-based). Your algorithm (counting sort) assumes that the range of numbers in the array is known and that all numbers are integers, whereas quicksort works for any comparable values.
You can learn more about sorting algorithms here; that link shows the complexity of sorting algorithms divided into the two categories, comparison and non-comparison.
EDIT
As Paul Hankin pointed out, the complexity isn't always O(n); it is O(n + k), where k is the max of the input array. Quoted below is the time complexity as explained in the Wikipedia article for counting sort:
Because the algorithm uses only simple for loops, without recursion or subroutine calls, it is straightforward to analyze. The initialization of the count array, and the second for loop which performs a prefix sum on the count array, each iterate at most k + 1 times and therefore take O(k) time. The other two for loops, and the initialization of the output array, each take O(n) time. Therefore, the time for the whole algorithm is the sum of the times for these steps, O(n + k).
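For reference, a sketch of a counting sort that also keeps repeated values, under the same assumption of non-negative ints as the question's code (this is the simple emit-each-value-count-times variant rather than the prefix-sum one the quote describes):

// Counting sort that preserves duplicates: O(n + k), where k is the maximum value.
public static int[] countingSort(int[] array) {
    int max = 0;
    for (int v : array) max = Math.max(max, v);
    int[] counts = new int[max + 1];
    for (int v : array) counts[v]++;           // counts[n] = number of n's in the input
    int[] out = new int[array.length];
    int index = 0;
    for (int value = 0; value <= max; value++) {
        for (int c = 0; c < counts[value]; c++)
            out[index++] = value;              // emit each value as many times as it occurred
    }
    return out;
}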
The given algorithm is very similar to counting sort, whereas quicksort is a comparison-based sorting algorithm. Quicksort gives O(n^2) time complexity only in the worst case; otherwise it is O(n log n). Quicksort is also commonly used in its randomized version, where the pivot is selected at random, which avoids the worst case most of the time.
Edit: correction, as pointed out by paulhankin in a comment: the complexity is O(n + k).
The code you have put forth uses counting-based sort, that is, counting sort, and its time complexity is O(n + k). What you must realize is that this algorithm depends on the range of the input, and that range could be anything. Furthermore, counting sort is not in-place, but it is stable. Often the data you want to sort is not plain integers but records carrying a key to sort by; if a stable algorithm is not used, sorting such data can be problematic.
Just in case someone does not know:
In-place algorithm: one whose additional space requirement does not depend on the given input.
Stable algorithm: one in which, for example, if there were two 5's in the data set before sorting, the 5 that came first before sorting still comes first after sorting.
EDIT (regarding aladinsane7's comment): yes, the formal version of counting sort handles duplicates too; it is worth having a look at the full counting sort. Its time complexity is O(n + k), where k accounts for the range of the data and n for the rest of the algorithm.

Fastest strategy to form and sort an array of positive integers

In Java, what is faster: to create, fill in and then sort an array of ints like below
int[] a = new int[1000];
for (int i = 0; i < a.length; i++) {
    // generate some random positive number less than 500
    a[i] = 1 + (int) (Math.random() * 499);
}
Arrays.sort(a); // (which algorithm is best?)
or insert-sort on the fly
int[] a = new int[1000];
for (int i = 0; i < a.length; i++) {
    // generate some random positive number less than 500
    int v = 1 + (int) (Math.random() * 499);
    int p = findPosition(a, v); // where to insert
    if (a[p] == 0) {
        a[p] = v;
    } else {
        // shift a one slot to the right from p, then insert
        System.arraycopy(a, p, a, p + 1, a.length - p - 1);
        a[p] = v;
    }
}
There are many ways that you could do this:
Build the array and sort as you go. This is likely to be very slow, since the time required to move array elements over to make space for the new element will almost certainly dominate the sorting time. You should expect this to take at best Ω(n^2) time, where n is the number of elements that you want to put into the array, regardless of the algorithm used. Doing insertion sort on-the-fly will take expected O(n^2) time here.
Build the array unsorted, then sort it. This is probably going to be extremely fast if you use a good sorting algorithm, such as quicksort or radix sort. You should expect this to take O(n log n) time (for quicksort) or O(n lg U) time (for radix sort), where n is the number of values and U is the largest value.
Add the numbers incrementally to a priority queue, then dequeue all elements from the priority queue. Depending on how you implement the priority queue, this could be very fast. For example, using a binary heap here would cause this process to take O(n log n) time, and using a van Emde Boas tree would take O(n lg lg U) time, where U is the largest number that you are storing. That said, the constant factors here would likely make this approach slower than just sorting the values.
Hope this helps!
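A minimal sketch of the second option, which is usually the right default here (for a primitive int[], Arrays.sort uses a tuned dual-pivot quicksort):

import java.util.Arrays;

public class FillThenSort {
    public static void main(String[] args) {
        int[] a = new int[1000];
        for (int i = 0; i < a.length; i++) {
            a[i] = 1 + (int) (Math.random() * 499); // random positive int below 500
        }
        Arrays.sort(a); // O(n log n); dual-pivot quicksort for primitives
        System.out.println(Arrays.toString(a));
    }
}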
